corpusid (int64, 110 to 268M) | title (string, 0 to 8.56k chars) | abstract (string, 0 to 18.4k chars) | citations (sequence, 0 to 142 items) | full_paper (string, 0 to 635k chars)
---|---|---|---|---
44,077,235 | NLPRL-IITBHU at SemEval-2018 Task 3: Combining Linguistic Features and Emoji Pre-trained CNN for Irony Detection in Tweets | This paper describes our participation in SemEval 2018 Task 3 on Irony Detection in Tweets. We combine linguistic features with pre-trained activations of a neural network. The CNN is trained on the emoji prediction task. We combine the two feature sets and feed them into an XGBoost classifier for classification. Subtask-A involves classification of tweets into ironic and non-ironic instances, whereas Subtask-B involves classification of tweets into non-ironic, verbal irony, situational irony, or other verbal irony. We observe that combining features from these two different feature spaces improves our system results. We leverage the SMOTE algorithm to handle the problem of class imbalance in Subtask-B. Our final model achieves an F1-score of 0.65 and 0.47 on Subtask-A and Subtask-B, respectively. Our system ranks 4th on both subtasks, outperforming the baseline by 6% on Subtask-A and 14% on Subtask-B. | [
44145664,
1957433,
15646625,
14728943
] | NLPRL-IITBHU at SemEval-2018 Task 3: Combining Linguistic Features and Emoji Pre-trained CNN for Irony Detection in Tweets
June 5-6, 2018
Harsh Rangwani harsh.rangwani.cse15@iitbhu.ac.in
Indian Institute of Technology (Banaras Hindu University), Varanasi, India
Devang Kulshreshtha devang.kulshreshtha.cse14@iitbhu.ac.in
Indian Institute of Technology (Banaras Hindu University), Varanasi, India
Anil Kumar Singh
Indian Institute of Technology (Banaras Hindu University), Varanasi, India
NLPRL-IITBHU at SemEval-2018 Task 3: Combining Linguistic Features and Emoji Pre-trained CNN for Irony Detection in Tweets
Proceedings of the 12th International Workshop on Semantic Evaluation (SemEval-2018)
The 12th International Workshop on Semantic Evaluation (SemEval-2018), New Orleans, Louisiana, June 5-6, 2018
This paper describes our participation in SemEval 2018 Task 3 on Irony Detection in Tweets. We combine linguistic features with pre-trained activations of a neural network. The CNN is trained on the emoji prediction task. We combine the two feature sets and feed them into an XGBoost classifier for classification. Subtask-A involves classification of tweets into ironic and non-ironic instances, whereas Subtask-B involves classification of tweets into non-ironic, verbal irony, situational irony, or other verbal irony. We observe that combining features from these two different feature spaces improves our system results. We leverage the SMOTE algorithm to handle the problem of class imbalance in Subtask-B. Our final model achieves an F1-score of 0.65 and 0.47 on Subtask-A and Subtask-B, respectively. Our system ranks 4th on both subtasks, outperforming the baseline by 6% on Subtask-A and 14% on Subtask-B.
Introduction
According to the Merriam-Webster dictionary1, one of the meanings of irony is 'the use of words to express something other than and especially the opposite of the literal meaning' (e.g., I love getting spam emails.). Irony can take different forms, such as verbal, situational, and dramatic irony. Sarcasm is also categorized as a form of verbal irony. Various attempts have been made in the past to detect sarcasm (Joshi et al., 2017). Sarcastic texts are characterized by the presence of humor and ridicule, which are not always present in ironic texts (Kreuz and Glucksberg, 1989). The absence of these characteristics makes automatic irony detection a more difficult problem than sarcasm detection.
Irony detection is important for many natural language understanding systems. For example, people often use irony to express their opinions on social media platforms such as Twitter (Buschmeier et al., 2014). Detecting irony in social texts can therefore aid opinion analysis.
The SemEval 2018 Task 3 (Van Hee et al., 2018) consists of two subtasks. Subtask-A involves predicting whether a tweet is ironic or not, and Subtask-B involves categorizing a tweet into Non-Ironic, Verbal Irony (by means of a polarity contrast), Situational Irony, or Other Forms of Verbal Irony. The task organizers use macro-averaged F1 rather than accuracy, to force systems to perform well on all four classes of tweets, as described in Section 3.1.
Systems built in the past primarily used handcrafted linguistic features for classifying ironic texts (Buschmeier et al., 2014; Farías et al., 2016). In our system, we combine them with the pre-trained activations of a neural network. Our results show that the two types of features complement each other: their combination surpasses either the linguistic or the pre-trained activation features used individually by a large margin. We use the XGBoost classifier (Chen and Guestrin, 2016), as it performs on par with neural networks when the available training data is small.
Our results indicate that oversampling techniques like SMOTE (Chawla et al., 2002) can also be used to oversample the representations generated using neural networks to improve performance on imbalanced datasets.
The rest of the paper is organized as follows: Section 2 gives a detailed description of how our system was built, Section 3 then describes the experimental setup and the results obtained and Section 4 concludes the paper.
Proposed Approach
For modeling irony in tweets, our system makes use of a combination of features. These features can be classified into two broad groups:
• Linguistic (Structure and User Behavior)
• Pre-trained Activations of a Neural Network.
These features were concatenated, and the XGBoost classifier (Chen and Guestrin, 2016) was used to perform the classification.
For Subtask-B, to counter the imbalance in the dataset, which might lead classifiers to favor the majority class, we used SMOTE to oversample the data (Chawla et al., 2002). We then used the XGBoost classifier again for classification into the various classes.
The details of the classifier parameters are provided in Section 2.2. Basic preprocessing of tweets was performed before feature extraction, which involved removing hash symbols ('#'), expanding contractions ('doesn't' to 'does not'), removing links and quotations, and normalizing the text to lower case. We explicitly mention those features whose extraction requires the original tweets.
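A minimal sketch of this preprocessing step (the contraction map and regular expressions here are illustrative assumptions, not the authors' exact implementation):

```python
import re

# Illustrative contraction map; the authors' actual list is not specified in the paper.
CONTRACTIONS = {"doesn't": "does not", "can't": "can not", "won't": "will not"}

def preprocess(tweet):
    """Basic cleanup: lowercase, drop links, expand contractions, remove quotes and '#'."""
    text = tweet.lower()
    text = re.sub(r"https?://\S+", " ", text)            # remove links
    for short, full in CONTRACTIONS.items():             # expand contractions first
        text = text.replace(short, full)
    text = re.sub(r"[\"']", " ", text)                   # remove quotations
    text = text.replace("#", "")                         # keep the hashtag word, drop '#'
    return re.sub(r"\s+", " ", text).strip()

print(preprocess("I LOVE getting spam emails... #not http://example.com"))
```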
Feature Extraction
Our system generates a 72-dimensional handcrafted feature vector, based on the linguistic structure and user behavior. We then combine this with a 2304-dimensional feature vector generated using activations of a pre-trained CNN. The combined features are categorized into 11 broad classes:
Contrast Based Features: Contrast of sentiments is a feature that has been observed in sarcastic and ironic texts (Rajadesingan et al., 2015), e.g. I love being ignored #not. For capturing contrast, we use the affect score of lemmas (Warriner et al., 2013) and the sentiment score of words based on SentiStrength (Thelwall et al., 2010). The final feature vector consists of:
• The difference between the highest and lowest sentiment values of the words present in the tweet.
(1 feature)
• The difference between the highest and lowest affect scores of the words present in the tweet.
(1 feature)
• Longest unimodal sequence size and the number of transitions of sentiment polarity.
(2 features)
• Sum of sentiment scores and counts of positive and negative n-grams. (4 features)
Readability Based Features: Ironic texts are usually complex, and hence we use the total number of syllables in the tweet, along with the number of polysyllabic words, as features. According to the Automated Readability Index (Senter and Smith, 1967), the standard deviation, the average, and the median of the word length serve as indicators of the complexity of the text (Rajadesingan et al., 2015).
Incongruity of Context: Ironic similes are common in literature (e.g., as clear as mud, in which both clear and mud are sentiment-neutral words). Due to this neutrality, lexicon-based methods are unable to capture the incongruity present. Therefore, the maximum and minimum GloVe (Pennington et al., 2014) cosine similarity between any two words in a tweet are used as features in our system (Joshi et al., 2016).
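A sketch of how these two incongruity features could be computed; the toy vectors below only stand in for real GloVe embeddings loaded into a word-to-vector dictionary:

```python
import itertools
import numpy as np

def incongruity_features(tokens, glove):
    """Maximum and minimum pairwise cosine similarity between word vectors in a tweet."""
    vecs = [glove[t] for t in tokens if t in glove]
    sims = [float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))
            for u, v in itertools.combinations(vecs, 2)]
    return (max(sims), min(sims)) if sims else (0.0, 0.0)

glove = {"clear": np.array([0.9, 0.1]), "mud": np.array([0.1, 0.9]), "as": np.array([0.5, 0.5])}
print(incongruity_features(["clear", "as", "mud"], glove))
```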
Repetition-based Features: Users often change their writing style to depict sarcasm and irony, which is analogous to the change of tone in speech while expressing sarcasm, e.g., Loooovvveeeeeee when my phone gets wiped. We use the count of words with repetitive characters and the count of 'senti words' (sentiment score ≥ 2 or ≤ -2) with repetitive characters as our features (Rajadesingan et al., 2015).
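One possible way to compute these two counts (the sentiment lookup is only a stub standing in for SentiStrength-style scores):

```python
import re

def repetition_counts(tokens, senti_score):
    """Count words with a character repeated 3+ times, and the subset that are 'senti words'."""
    repeated = [t for t in tokens if re.search(r"(.)\1{2,}", t)]
    senti_repeated = [t for t in repeated if abs(senti_score.get(t, 0)) >= 2]
    return len(repeated), len(senti_repeated)

print(repetition_counts(["loooovvveeeeeee", "when", "my", "phone", "gets", "wiped"], {}))
```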
Punctuation-based Features: Punctuation counts can sometimes serve as an indicator of ironic texts (Kreuz and Caucci, 2007). We use the counts of characters like hashtag (#), ellipsis (...), exclamation mark (!), question mark (?), colon (:), quote (") and apostrophe (') in a tweet as features.
Presence of Markers: Discourse markers are certain words that help in expressing ideas and performing specific functions (Farías et al., 2016). Our system uses a curated list of discourse markers. Similar to the list of discourse markers, we also use lists of intensifiers (e.g. heck), laughter words (e.g. lmao, lol), interjections (e.g. oops) and swear words (e.g. shit), as their appearance in a tweet indicates the presence of unexpectedness, which can, in turn, serve as an indicator of irony. We use the counts of these different types of words separately as features.
Word Count Features: According to (2016), ironic tweets depict their content in fewer words than normal tweets. Hence we use the word count of a tweet as a feature. Apart from the word count, Kreuz and Caucci (2007) suggest that the counts of adjectives and adverbs can also be used as markers of ironic content. We also use the preposition count as a separate feature.
Semantic Similarity: Ironic tweets that span multiple lines are often found to have lines that are very much semantically dissimilar to each other (Farías et al., 2016). We use the WordNet based similarity function (Mihalcea et al., 2006) available online 2 to obtain a similarity score, which is used as a feature.
Polarity and Subjectivity: Ironic texts are usually subjective and often convey something negative (or positive) about the target (Wallace et al., 2015). We use the Polarity and Subjectivity Scores (Sentiment Score) generated using TextBlob as features in our model (Loria et al., 2014).
URL Counts: We observed in the training set that users often used irony to express their opinion about online content, e.g. blogs, images, tweets, etc. For specifying the context of a comment (tweet), they often add a URL to the original content. So we used the counts of URLs in a tweet as a feature. Our system requires raw tweets for extracting this feature.
Apart from the above features, we also experimented with Named Entity Count and occurrence of popular hashtags like (#hypocrisy), using a curated list, as our features (Van Hee, 2017).
Pre-trained CNN Features
Apart from extracting linguistic features from tweets, we leverage the activations of a Convolutional Neural Network (CNN) pre-trained on an emoji prediction task. We use DeepMoji3 (Felbo et al., 2017), a model trained on 1.2 billion tweets with emojis and tested on eight benchmark datasets within sentiment, emotion, and sarcasm detection. Since sarcasm is a form of verbal irony that expresses ridicule or contempt (Long and Graesser, 1988), we believe transferring the knowledge of a CNN trained on sarcasm can improve the results of the irony detection task.
Each tweet is converted into a 2304-dimensional feature vector by feeding it into the DeepMoji CNN and extracting the activations of the last hidden layer.
2 http://nlpforhackers.io/wordnet-sentence-similarity/
3 https://github.com/bfelbo/DeepMoji
Classifiers
We construct XGBoost (Chen and Guestrin, 2016) feature-based classifiers for irony detection using the above features. Based on the 10-fold cross validation performance, the best performing parameters prove to be the default parameters used by the XGBoost Classifier Package 4 .
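A minimal sketch of this setup with the scikit-learn interface of XGBoost; the random matrices below merely stand in for the 72-dimensional linguistic features and the 2304-dimensional DeepMoji activations, and current package defaults may differ from those used by the authors:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from xgboost import XGBClassifier

rng = np.random.default_rng(0)
linguistic_feats = rng.random((200, 72))      # stand-in for the handcrafted features
deepmoji_feats = rng.random((200, 2304))      # stand-in for DeepMoji activations
y = rng.integers(0, 2, 200)                   # stand-in irony labels

X = np.hstack([linguistic_feats, deepmoji_feats])
clf = XGBClassifier()                         # default parameters, as reported above
print("10-fold CV F1:", cross_val_score(clf, X, y, cv=10, scoring="f1").mean())
```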
Handling Class Imbalance
The data provided for subtask-B is highly skewed.
To perform well on every class of irony, we used an oversampling technique, SMOTE (Chawla et al., 2002). In SMOTE, to generate a new synthetic sample, one of the k-nearest neighbors of an instance is chosen at random, and a new sample is generated on the line joining the instance and the chosen neighbor. We use the SMOTE implementation available in the imblearn package (Lemaître et al., 2017) for our system, with k_neighbors equal to 5.
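A sketch of the oversampling step with imbalanced-learn, using the same neighborhood size; the feature matrix and skewed labels are synthetic stand-ins for the Subtask-B data, and fit_resample is the current API name:

```python
import numpy as np
from collections import Counter
from imblearn.over_sampling import SMOTE

rng = np.random.default_rng(0)
X = rng.random((400, 20))                            # stand-in feature vectors
y = np.array([0] * 300 + [1] * 60 + [3] * 40)        # skewed class distribution

X_res, y_res = SMOTE(k_neighbors=5, random_state=0).fit_resample(X, y)
print(Counter(y), "->", Counter(y_res))
```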
Experiments and Evaluation
Dataset and Metrics
The annotated tweet corpus provided for training consists of 1390 instances of Verbal Irony due to polarity contrast, 205 instances of Other Types of Verbal Irony, 316 Situational Irony instances, and 1923 Non-Ironic instances. Our system only uses the training data provided by the organizers, and no other annotated data is used (Constrained System). The test dataset for Subtask-A contains 473 non-ironic tweets and 311 ironic tweets. For Subtask-B, the 311 ironic tweets are further classified into Verbal Irony by means of Polarity Contrast (164), Situational Irony (85), and Other Forms of Verbal Irony (62).
The evaluation metric used for ranking teams in Subtask-A is the F1 score of the positive (Ironic) class, whereas in Subtask-B the organizers use macro-averaged F1 (the average of F1 over each class) as the evaluation metric for ranking teams.
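Both measures are standard and can be computed, for instance, with scikit-learn (the organizers' own scorer may differ); the label vectors below are toy examples:

```python
from sklearn.metrics import f1_score

# Subtask-A: binary labels, F1 of the positive (ironic) class.
a_true, a_pred = [1, 0, 1, 1, 0, 1], [1, 0, 1, 0, 0, 1]
print(f1_score(a_true, a_pred, pos_label=1))

# Subtask-B: four-way labels, macro-averaged F1 over all classes.
b_true, b_pred = [0, 1, 2, 3, 0, 1, 2, 3], [0, 1, 2, 0, 0, 1, 3, 3]
print(f1_score(b_true, b_pred, average="macro"))
```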
Results and Discussion
We present the results achieved by our approaches, as well as their combinations, in Table 1. Our final submitted systems are (Linguistic + Pretrained CNN) for Task-A and (Linguistic + Pretrained CNN + SMOTE) for Task-B. We discuss the major takeaways from the results below.
• Our submitted models achieve 4th position on the public leaderboard5 on both Task-A and Task-B and beat the task baselines by about 6% and 14%, respectively, on the test set.
• Leveraging the DeepMoji model for the irony detection domain yields a considerable improvement over purely linguistic features (0.03 and 0.12). This is because the model is trained on over a billion tweets covering sarcasm and four other domains. As stated earlier, sarcasm is a verbal form of irony (Long and Graesser, 1988), and transfer learning works because the domains are quite similar.
• Our combination of linguistic features with the pre-trained CNN achieves an F-score of 0.65 and 0.42, an improvement of at least 0.03 on Task-A and a significant improvement on Task-B, compared to linguistic features alone. The higher scores point to the power of ensemble learning by combining different feature spaces, as the two feature sets specialize in different types of tweets.
• The use of SMOTE oversampling technique leads to an F-score of 0.47 in Task-B, which is an improvement of 0.05 over (Linguistic + Pretrained CNN) model.
• The improvement in scores due to linguistic features is not as pronounced in Subtask-B as in Subtask-A. One possible reason is that linguistic features are not able to capture the fine-grained differences between different forms of irony.
Table 1: F1 scores in Task A and macro F1 in Task B on the test set.

| Approach | Task-A Precision | Task-A Recall | Task-A F1 | Task-B Precision | Task-B Recall | Task-B F1 |
|---|---|---|---|---|---|---|
| Linguistic | 0.48 | 0.78 | 0.59 | 0.32 | 0.36 | 0.30 |
| Pretrained CNN | 0.60 | 0.63 | 0.62 | 0.51 | 0.44 | 0.42 |
| Linguistic + Pretrained CNN | 0.55 | 0.79 | 0.65 | 0.53 | 0.44 | 0.42 |
| Linguistic + Pretrained CNN + SMOTE | - | - | - | 0.46 | 0.51 | 0.47 |
| Baseline (Linear SVC over BoW) | 0.56 | 0.63 | 0.59 | 0.48 | 0.36 | 0.33 |
1 https://www.merriam-webster.com/dictionary/irony
4 http://xgboost.readthedocs.io/en/latest/parameter.html
5 https://competitions.codalab.org/competitions/17468#results
Conclusion
We reported the use of handcrafted features and pre-trained CNN activations for predicting irony in tweets. We implemented a variety of features based on user behavior as well as the linguistic structure of a tweet. We further exploit the SMOTE oversampling technique to handle the class imbalance problem in Subtask-B, which involves categorizing a tweet into Non Ironic, Verbal Irony, Situational Irony, or Other Verbal Irony. We then feed the features into the XGBoost classifier for both tasks. The benefit of using CNN models pre-trained on the sarcasm, sentiment, and emotion domains can be clearly seen, yielding an improvement of 3% and 9% over the task baselines. Our final submitted system stood 4th in both subtasks of the SemEval 2018 shared task on "Irony Detection in English Tweets".
Konstantin Buschmeier, Philipp Cimiano, and Roman Klinger. 2014. An impact analysis of features in a classification approach to irony detection in product reviews. In Proceedings of the 5th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, pages 42-49.
Nitesh V Chawla, Kevin W Bowyer, Lawrence O Hall, and W Philip Kegelmeyer. 2002. SMOTE: synthetic minority over-sampling technique. Journal of Artificial Intelligence Research, 16:321-357.
Tianqi Chen and Carlos Guestrin. 2016. XGBoost: A scalable tree boosting system. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 785-794. ACM.
Delia Irazú Hernańdez Farías, Viviana Patti, and Paolo Rosso. 2016. Irony detection in Twitter: The role of affective content. ACM Transactions on Internet Technology (TOIT), 16(3):19.
Bjarke Felbo, Alan Mislove, Anders Søgaard, Iyad Rahwan, and Sune Lehmann. 2017. Using millions of emoji occurrences to learn any-domain representations for detecting sentiment, emotion and sarcasm. arXiv preprint arXiv:1708.00524.
Aditya Joshi, Pushpak Bhattacharyya, and Mark J Carman. 2017. Automatic sarcasm detection: A survey. ACM Computing Surveys (CSUR), 50(5):73.
Aditya Joshi, Vaibhav Tripathi, Kevin Patel, Pushpak Bhattacharyya, and Mark Carman. 2016. Are word embedding-based features useful for sarcasm detection? arXiv preprint arXiv:1610.00883.
Roger J Kreuz and Gina M Caucci. 2007. Lexical influences on the perception of sarcasm. In Proceedings of the Workshop on Computational Approaches to Figurative Language, pages 1-4. Association for Computational Linguistics.
Roger J Kreuz and Sam Glucksberg. 1989. How to be sarcastic: The echoic reminder theory of verbal irony. Journal of Experimental Psychology: General, 118(4):374.
Guillaume Lemaître, Fernando Nogueira, and Christos K. Aridas. 2017. Imbalanced-learn: A Python toolbox to tackle the curse of imbalanced datasets in machine learning. Journal of Machine Learning Research, 18(17):1-5.
Debra L Long and Arthur C Graesser. 1988. Wit and humor in discourse processing. Discourse Processes, 11(1):35-60.
Steven Loria, P Keen, M Honnibal, R Yankovsky, D Karesh, E Dempsey, et al. 2014. TextBlob: simplified text processing. Secondary TextBlob: Simplified Text Processing.
Rada Mihalcea, Courtney Corley, Carlo Strapparava, et al. 2006. Corpus-based and knowledge-based measures of text semantic similarity. In AAAI, volume 6, pages 775-780.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.
Ashwin Rajadesingan, Reza Zafarani, and Huan Liu. 2015. Sarcasm detection on Twitter: A behavioral modeling approach. In Proceedings of the Eighth ACM International Conference on Web Search and Data Mining, pages 97-106. ACM.
RJ Senter and Edgar A Smith. 1967. Automated readability index. Technical report, Cincinnati Univ OH.
Mike Thelwall, Kevan Buckley, Georgios Paltoglou, Di Cai, and Arvid Kappas. 2010. Sentiment strength detection in short informal text. Journal of the Association for Information Science and Technology, 61(12):2544-2558.
Cynthia Van Hee. 2017. Can machines sense irony?: exploring automatic irony detection on social media. Ph.D. thesis, Ghent University.
Cynthia Van Hee, Els Lefever, and Véronique Hoste. 2018. SemEval-2018 Task 3: Irony Detection in English Tweets. In Proceedings of the 12th International Workshop on Semantic Evaluation, SemEval-2018, New Orleans, LA, USA. Association for Computational Linguistics.
Byron C Wallace, Eugene Charniak, et al. 2015. Sparse, contextually informed models for irony detection: Exploiting user communities, entities and sentiment. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), volume 1, pages 1035-1044.
Amy Beth Warriner, Victor Kuperman, and Marc Brysbaert. 2013. Norms of valence, arousal, and dominance for 13,915 English lemmas. Behavior Research Methods, 45(4):1191-1207. |
218,973,873 | [] | The Effect of Linguistic Parameters in Cross-Language Information Retrieval Performance Evidence from IARPA's MATERIAL Program
May 2020
Carl Rubino
Carl.Rubino@iarpa.gov
Intelligence Advanced Research Projects Activity (IARPA), Washington, DC 20511, USA
The Effect of Linguistic Parameters in Cross-Language Information Retrieval Performance Evidence from IARPA's MATERIAL Program
Proceedings of the Cross-Language Search and Summarization of Text and Speech Workshop, Marseille, May 2020
Keywords: linguistic evaluation, cross-language information retrieval, linguistic parameters, language typology, NLP program design
In IARPA's MATERIAL program, choosing languages and acquiring corpora to develop an effective End-to-End Cross-Language Information Retrieval (CLIR) system for speech and text, and component technologies thereof, was strategically planned to enable language-independent methods for CLIR development and evaluation. It was believed that a typologically diverse set of languages, coupled with a heterogeneous evaluation condition, would stimulate participating research teams to construct engines that would be usable in diverse environments and responsive to changing data conditions. This paper details how the MATERIAL program investigated certain linguistic parameters to guide the language choice, data collection, and partitioning, and to better understand evaluation results.
Introduction
IARPA's Machine Translation for English Retrieval of Information in Any Language (MATERIAL) program was launched in 2017 to stimulate research on a wide array of human language technologies optimized to support cross-language information retrieval and summarization. Four multinational teams (led by Columbia University, Johns Hopkins University, Raytheon BBN and USC-ISI), chosen via competitive selection, were tasked with building End-to-End CLIR systems capable of retrieving, in a fully automated way, foreign-language speech and text documents responsive to a new typology of English queries, and providing evidence of relevance, in English, of the retrieved documents for human consumption (Rubino, 2017).
Prior to the 2017 kickoff of the program, nearly two years were devoted to negotiating the data collection, guided by the program's strategic evaluation methodology. This included separate training and testing conditions for both speech and text, a diverse set of languages to explore, and challenging development time frames that decreased as the program progressed.
IARPA collaborated with its Test and Evaluation (T&E) partners at the University of Maryland's Center for Advanced Study of Language (CASL), NSA's Center for Applied Machine Translation (CAMT), and MIT-Lincoln Laboratories to choose an optimal mix of diverse languages which would be incrementally released to the performing teams to stimulate and measure progress across three program periods. Two factors were most critical in initially determining the language choice: typological diversity, measured by divergent phonological, morphological and syntactic properties, and resource availability. To allow for the program's mismatch between the training and testing conditions and the requirement to identify domains without additional source language training, the languages eventually collected and annotated also had to have a substantial presence on the web. This would enable the performing teams to harvest relevant data to complement the small training sets provided by IARPA to seed the CLIR system development. Web harvesting was crucial to the program to improve the performance of applications against genres not represented in the training data, e.g. for speech, all the training data were conversational telephony, but the evaluation condition included broadcast news (Rubino, 2019). IARPA followed a strict language release schedule, not divulging the language identities until the start of each relevant development phase. This ensured that progress could be measured temporally and consistently between teams. As of May 2020, six languages were provided. Listed in order of release, these were: Tagalog (TGL), Swahili (SWA), Somali (SOM), Bulgarian (BUL), Lithuanian (LIT) and Pashto (PUS).
The Metrics
It was important to IARPA to evaluate the systems on a meaningful task-based measure. The primary performance measure used to assess the CLIR aspect of performer systems was a novel detection metric, related to the keyword spotting metric Actual Term Weighted Value (ATWV) used in the IARPA Babel program (Fiscus et al., 2007). The MATERIAL metric, Actual Query Weighted Value (AQWV), expresses an average of all Query Values for a system operating under its actual decision threshold. This allowed all queries to be treated equally regardless of the number of documents annotated as relevant to them in the ground truth. Query Value (QV) is defined as:
QV = 1 − P_Miss − β × P_FA    (1)
where P_Miss is the probability that a relevant document for the query will not be detected (a miss against the ground truth), and P_FA is the probability that a non-relevant document will be incorrectly detected (a false alarm against the ground truth). The parameter β allowed for the relative weighting of misses and false alarms. It was derived from the following formula:
β = (C / V) × (1 / P_Rel − 1)    (2)
where C is the cost of an incorrect detection, V is the value of a correct detection, and P_Rel is the prior probability that a document is relevant to the query. This value changed under different conditions but remains constant for all data described herein. A perfect system that returned all relevant documents without false alarms would receive a score of 1. A system that did not return anything would receive a score of 0. If all the documents a system detected were false alarms, the score would be -β.
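A sketch of how Equations (1) and (2) combine into AQWV; the per-query miss and false-alarm probabilities and the prior below are illustrative values, not program settings:

```python
def beta(cost=1.0, value=1.0, p_rel=0.001):
    """Equation (2): weight on false alarms."""
    return (cost / value) * (1.0 / p_rel - 1.0)

def query_value(p_miss, p_fa, b):
    """Equation (1): value of a single query at the system's decision threshold."""
    return 1.0 - p_miss - b * p_fa

def aqwv(per_query_probs, b):
    """Actual Query Weighted Value: the average of Query Values over all queries."""
    return sum(query_value(pm, pf, b) for pm, pf in per_query_probs) / len(per_query_probs)

b = beta(p_rel=0.001)                                  # illustrative prior probability
print(aqwv([(0.2, 1e-4), (0.5, 0.0), (0.0, 2e-4)], b))
```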
IARPA also provided roughly six hundred translated and transcribed documents, released as an Analysis Set, to allow the teams to measure component progress in speech recognition and machine translation (MT) using traditional metrics Word Error Rate (WER) and BLEU, respectively.
Linguistic Parameters Measured
Building CLIR systems capable of addressing both speech and text entails creating multiple component technologies, then learning how to optimally integrate them for information retrieval. Since a primary purpose of the MATERIAL program was to inspire novel research in both speech and translation, presumed challenges stemming from linguistic complexities and language anomalies were actively sought out by the T&E team as a means to advance research appropriately.
From a linguistic perspective, a number of parameters that could potentially affect system performance may immediately come to mind, to include both typological features of the languages such as phonetic inventory, morphological complexity, and word order, to sociolinguistic features to include dialectology, script standardization, literacy and diglossia. MATERIAL's T&E Team collected linguistic statistics on the candidate languages, focusing on features that were assumed to have a higher chance of correlation with Natural Language Processing (NLP) performance. For a sample of these kinds of linguistic variables, selected parameter values from the World Atlas of Language Structures (WALS) for the MATERIAL languages released so far are given in Table 1 with their numeric WALS Feature value (Dryer and Haspelmath, 2013). Parameters considered to be more challenging for NLP applications in the table are shown in bold.
For some linguistic features, typological resources do exist that enable us to quantify differences between or across languages. The URIEL knowledge base and its lang2vec utility, for example, provide vector identifications of languages measured from a variety of parameters taken from typological, geographical and phylogenetic databases to aid in NLP correlational analysis (Littell et al., 2017). Using lang2vec, vectors representing multiple syntactic features (often binary), manually drawn from WALS, and the Syntactic Structures of the World Languages (Collins and Kayne, 2011) can be compared across languages to compute a relative distance between any set of languages for an available amalgamation of categories. While such vector values may appear to be helpful in differentiating languages by their features, some caveats should be noted. First, no weighting mechanism is introduced to calculate the vector; all categories, regardless of their potential effect on NLP applications are treated equally. Furthermore, not all languages in the collection are represented equally for all the typological dimensions measured. Some features, in fact, were predicted from typological inference and genetic relationships. Nevertheless, we felt a conglomerate distance measure derived from a wide variety of linguistic categories was worth investigating. Table 2 exemplifies the lang2vec tool's distance calculations between English and the MATERIAL languages for four dimensions: phonological features, syntactic features, a compound value of the product of phonological and syntactic distance, and phonetic inventory.
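A sketch of the kind of distance computation described here, assuming the pip-installable lang2vec package and its get_features interface (feature-set names and availability vary by version; the kNN-imputed syntactic set is used so that no values are missing):

```python
import numpy as np
import lang2vec.lang2vec as l2v

def cosine_distance(u, v):
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return 1.0 - np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# ISO 639-3 codes for English, Swahili, and Lithuanian.
feats = l2v.get_features(["eng", "swa", "lit"], "syntax_knn")
for lang in ["swa", "lit"]:
    print(lang, round(cosine_distance(feats["eng"], feats[lang]), 3))
```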
Because Automatic Speech Recognition (ASR) was an integral part of the program, the T&E Team paid considerable attention to phonological features and phonetic inventories of the languages they chose to roll out. Multiple resources were available to capture phonetic and phonological properties, then relay them to the performing teams with each language via a document entitled "Language Specific Design Document", jointly authored by CASL and the data collector Appen Butler Hill. To contrast the specific MATERIAL languages for this paper, we counted three inventories as shown in Table 3: the number of consonants, number of vowels, and the number of segments (composed of the number of consonants, vowels and tones). These measures were extracted from the Phoible database which provides online search through an intuitive interface (Moran and McCloy, 2019). Because no single database provides complete coverage of the languages for which phonetic inventories have been documented, Phoible contains multiple databases that often conflict with each other in their counts. Where differing counts in the Phoible database were encountered, the values cited in the UCLA Phonological Segment Inventory Database took precedence, followed by the Stanford Phonology Archive.
The Baseline Systems
To relate the linguistic features to current program progress, we will introduce results for several baseline systems contributing to the CLIR pipeline, as well as the CLIR system itself. These rudimentary systems were produced with minimal training data, often just the program build pack and other noted, publicly available low-hanging-fruit resources. Development for the program parameters was also minimal. The ASR baselines reported involve a CNN Long Short-Term Memory Network (CNN-LSTM) system trained on MATERIAL Audio Build data and 1500 hours from several languages, including languages released in the Babel program, English and Arabic. The CNN-LSTM+ model cited also includes an expanded model and lexicon generated from a web text harvest and lexicon which significantly decreased the Out-of-Vocabulary (OOV) rate and improved WER scores.
The CLIR baselines detailed in Table 5 reflect the AQWV results on the MATERIAL Analysis Set, with separate numbers provided for retrieval on text vs. speech, presented as Text / Speech. For the first three languages of the program, Tagalog, Swahili and Somali, the low resource conditions were augmented with a web harvest that included PanLex and data from DARPA's LORELEI program. These additional resources were incrementally included in the CLIR systems for Lithuanian and Bulgarian, and were not present for Pashto.
Correlates of Performance
Because ASR systems for the MATERIAL languages were trained with multilingual features without regard to English, we initially only investigated potential correlations between the syntactic vectors and two program tasks that would require English language transfer: machine translation (via BLEU) and CLIR (via AQWV). We found no strong correlation between the English syntactic distance vectors and the MT task measured by BLEU (NMT r(4) = −.09, PBMT r(4) = −.22; see Figure 1), or the CLIR task measured by AQWV (Text r(4) = .03, Speech r(4) = .20). A number of reasons can be postulated for why no correlation would exist between CLIR scores and English distance scores, such as highly diverse datasets measured for information retrieval per language, non-uniform averaged relevance probabilities for the query sets built for each language, and varying degrees of complexity between the query sets used to evaluate each language. While the number of queries released per language was relatively uniform, the composition of query types was not. More detailed descriptions of the query typology and datasets can be found in the MATERIAL Evaluation Plan here: https://bit.ly/39cNGoo.
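Correlations of this form (Pearson's r with 4 degrees of freedom over the six languages) can be reproduced with SciPy; the distance values below are placeholders, while the BLEU figures are the NMT-Multi scores from Table 4:

```python
from scipy.stats import pearsonr

# Order: TGL, SWA, SOM, LIT, BUL, PUS.
syntactic_distance = [0.52, 0.48, 0.55, 0.40, 0.38, 0.60]   # placeholder distances
nmt_bleu = [38.7, 35.4, 22.3, 30.2, 43.2, 17.5]             # NMT-Multi BLEU (Table 4)

r, p = pearsonr(syntactic_distance, nmt_bleu)               # n = 6, so df = n - 2 = 4
print(f"r(4) = {r:.2f}, p = {p:.3f}")
```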
Surprisingly, when we compared MT performance to phonological distance, we found a strong negative correlation with NMT BLEU, r(4) = −.93, p = .008, but not against PBMT performance, where r(4) = −.72, p = .106 (Figure 1). To compare MT performance with a more intuitive measure, we calculated a new compound linguistic measure, the product of syntactic and phonological distance, where the negative correlation with NMT and PBMT is more apparent and significant, r(4) = −.95, p = .004. See Table 2.

Not surprisingly, exploring the segment counts detailed in Table 3 to compare with a baseline CNN-LSTM monolingually trained engine yielded no evidence of correlation, r(4) = −.24, p = .642 (Figure 3). Even less surprising was the observation that the Inventory Distance vector from English and ASR performance on the CNN-LSTM system were also not correlated, r(4) = −.53, p = .281. Much diversity was present in the program's speech data. The audio used for evaluation was somewhat consistent for genre distribution and sampling rates between languages, but not for recording quality or other critical factors such as the amount of data with music, dialect diversity in the collection, or the number of speakers recorded.
Categorizing languages with absolute features can be intriguing theoretically, but most advantageous to the performers and the T&E team were quantifiable measures derived from program corpora. One way, for instance, of projecting possible lexical coverage problems would be to calculate OOV rates existing between development and test partitions of the IARPA-released training data. Languages with higher OOV rates may presumably have lexical gaps in text and, possibly, transcription anomalies in speech. For seeding machine translation development, IARPA provided training data for each language consisting of sentence-aligned bitexts from multiple news sources. To maximize diversity of the rather homogeneous collection, no more than five sentences were taken from the same article. Table 7 provides the word counts for these training corpora, along with translation ratios (foreign words/English words) and unique word ratios (unique source words/all source words). We investigated the unique word ratio as a potential correlate of vocabulary growth. Higher ratios, indicating larger vocabulary expansion, may derive from a variety of factors, such as lack of orthographic standards, segmentation anomalies, or increased morphological complexity. There was a weak negative correlation between the NMT Multilingual BLEU result and the unique word ratio, r(4) = −.73, p = .101.
Comparing baseline BLEU scores against the unique word ratios at the bitext size of 800K foreign-language words offered slight evidence of correlation for NMT, r(4) = −.73, p = .101, but not for PBMT performance, r(4) = −.48, p = .339. Likewise, no correlation was found between BLEU scores and vocabulary size in the smaller speech dataset of 80K words shown in Table 8: PBMT r(4) = .06, p = .911; NMT r(4) = .36, p = .489.
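A sketch of the two corpus-derived measures used here, computed over already-tokenized partitions (tokenization and normalization details are glossed over):

```python
def oov_rate(train_tokens, test_tokens):
    """Fraction of test tokens whose type never occurs in the training partition."""
    train_vocab = set(train_tokens)
    return sum(1 for t in test_tokens if t not in train_vocab) / len(test_tokens)

def unique_word_ratio(tokens):
    """Unique source words divided by all source words, a rough proxy for vocabulary growth."""
    return len(set(tokens)) / len(tokens)

train = "the cat sat on the mat".split()
test = "the dog sat on the rug".split()
print(oov_rate(train, test), unique_word_ratio(train))
```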
Conclusion
From the IARPA MATERIAL experience, choosing languages by linguistic parameters helps to ensure parametric diversity, which is critical to our ability to develop language-independent CLIR solutions in low resource conditions, a fundamental question posed by the program. Certain typological parameters that we may assume to be tightly linked to CLIR results often have no correlation with the actual performance of the NLP applications to which the parameters would seem intuitively relevant. Discerning which linguistic parameters correlated with overall performance enabled IARPA to evaluate CLIR progress when different languages were measured. Some parameters were also a significant factor for Performing Teams in determining the most effective CLIR pipeline design, customized to handle language-specific properties deemed necessary to address. These pipelines, as well as data collection and use strategies, differed between teams and languages, the details of which are beyond the scope of this paper.
We have shown, albeit with a relatively small sample of diverse languages and only using immature baseline systems, that amalgamated typological distance vectors between the MATERIAL languages and English quite unexpectedly and counter-intuitively did correlate with MT BLEU scores, but not with AQWV or WER measures.
We suggest that when choosing languages to design or evaluate an NLP research program, ample attention be paid to the language dimension as measured by the properties of the data used for training, development, and evaluation, as their correlation with performance is likely to exceed that of typological parameters presumed to be critical from a linguistic perspective.
Figure 1: Syntactic Distance from English vs. BLEU.
Figure 2: Phono-syntactic distance with NMT BLEU.
Figure 3: Segment Inventory vs. CNN-LSTM WER.
Figure 4: ASR performance as correlated to text OOV.
Figure 5: ASR performance as correlated to speech OOV.
Table 2: Lang2Vec values for chosen linguistic attributes (phonological, syntactic).

Table 3: Phonetic Inventories from Phoible.

| Lang. | Segments | Consonants | Vowels | Syllable Structure |
|---|---|---|---|---|
| TGL | 23 | 18 | 5 | Moderately Complex |
| SWA | 36 | 31 | 5 | Simple |
| SOM | 32 | 22 | 10 | Moderately Complex |
| LIT | 52 | 36 | 16 | Complex |
| BUL | 42 | 36 | 6 | Complex |
| PUS | 38 | 31 | 7 | Complex |

Table 4 reports the component technology baselines in terms of BLEU (for machine translation) and WER (for speech recognition) calculated for the MATERIAL Analysis Set. For machine translation, the following baselines were reported: a phrase-based statistical (PBMT) system trained on the MATERIAL Build Pack augmented with the Long Now Foundation's PanLex lexicon available at panlex.org, and three neural MT (NMT) systems trained on the MATERIAL Build Pack with PanLex (NMT), with additional engines trained on additional in-language data available from a web harvest (NMT-Mono), and a third NMT engine that also includes training data from additional, often related, languages (NMT-Multi).

Table 4: MT and ASR Baselines.

| Model | TGL | SWA | SOM | LIT | BUL | PUS |
|---|---|---|---|---|---|---|
| MT Baselines (BLEU) | | | | | | |
| PBMT | 33.0 | 22.8 | 17.3 | 17.6 | 32.3 | 13.3 |
| NMT | 27.9 | 23.6 | 14.7 | 19.5 | 33.3 | N/A |
| NMT-Mono | N/A | N/A | N/A | 29.8 | 43.1 | 12.6 |
| NMT-Multi | 38.7 | 35.4 | 22.3 | 30.2 | 43.2 | 17.5 |
| Speech Recognition Baselines (WER) | | | | | | |
| CNN-LSTM | 46.6 | 44.3 | 60.6 | 47.9 | 40.0 | 42.8 |
| CNN-LSTM+ | 33.9 | 33.7 | 49.4 | 23.4 | 21.3 | 39.9 |

Table 5: CLIR Baselines in terms of AQWV (Text/Speech).

Table 6: OOV rates calculated by training partition.

The text OOV rates did indeed correlate with the performance of the NMT engine trained with multilingual data, perhaps as a function of the effectiveness of each language's data harvest of differing sizes to lower the OOV rates, r(3) = −.87, p = .005. Likewise, the LSTM+ ASR engine performance correlates to the OOV rates observed in speech, r(3) = .93, p = .022. See Figures 4 and 5.

Table 7: MATERIAL MT Training Data Statistics.

Table 8: Vocabulary statistics from the Speech Build packs.
Acknowledgments
I thank [...] for fruitful discussions on linguistics and evaluation. I am also grateful to Audrey Tong, Catherine Cotell, and the anonymous editors of this volume for their comments on a previous version of this paper. This work is supported by the Office of the Director of National Intelligence (ODNI), Intelligence Advanced Research Projects Activity (IARPA). The views and conclusions contained herein are those of the author and should not be interpreted as necessarily representing the official policies, either expressed or implied, of ODNI, IARPA, or the U.S. Government. The U.S. Government is authorized to reproduce and distribute reprints for governmental purposes notwithstanding any copyright annotation therein.
Bibliographical References
Collins, C. and Kayne, R. (2011). Syntactic Structures of the World's Languages. New York University, New York.
Dryer, M. and Haspelmath, M. (2013). The World Atlas of Language Structures Online. Leipzig, Germany: Max Planck Institute for Evolutionary Anthropology.
Fiscus, J. G., Ajot, J., Garofolo, J. S., and Doddington, G. (2007). Results of the 2006 spoken term detection evaluation. In Proceedings of the ACM SIGIR Workshop on Searching Spontaneous Conversational Speech, pages 51-55. ACM SIGIR.
Littell, P., Mortensen, D. R., Lin, K., Kairis, K., Turner, C., and Levin, L. (2017). URIEL and lang2vec: Representing languages as typological, geographical, and phylogenetic vectors. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 8-14.
Moran, S. et al., editors. (2019). PHOIBLE 2.0. Jena: Max Planck Institute for the Science of Human History.
Rubino, C. (2017). MATERIAL Broad Agency Announcement. https://bit.ly/37gKhV9.
Rubino, C. (2019). IARPA's Contribution to Human Language Technology Development for Low Resource Languages. In Language Technologies for All Conference. UNESCO. https://bit.ly/39e2mD4. |
||
1,152,575 | Bilingual Collocation Extraction Based on Syntactic and Statistical Analyses | In this paper, we describe an algorithm that employs syntactic and statistical analysis to extract bilingual collocations from a parallel corpus. Collocations are pervasive in all types of writing and can be found in phrases, chunks, proper names, idioms, and terminology. Therefore, automatic extraction of monolingual and bilingual collocations is important for many applications, including natural language generation, word sense disambiguation, machine translation, lexicography, and cross-language information retrieval. Collocations can be classified as lexical or grammatical collocations. Lexical collocations exist between content words, while a grammatical collocation exists between a content word and function words or a syntactic structure. In addition, bilingual collocations can be rigid or flexible in both languages. In a rigid collocation, the words must appear next to each other; otherwise, the collocation is flexible/elastic. We focus in this paper on extracting rigid lexical bilingual collocations. In our method, the preferred syntactic patterns are obtained from idioms and collocations in a machine-readable dictionary. Collocations matching the patterns are extracted from aligned sentences in a parallel corpus. We use a new alignment method based on punctuation statistics for sentence alignment. The punctuation-based approach is found to outperform the length-based approach, with precision rates approaching 98%. The obtained collocations are subsequently matched up based on cross-linguistic statistical association. Statistical association between whole collocations as well as between the words in collocations is used to link a collocation with its counterpart collocation in the other language. We implemented the proposed method on a very large Chinese-English parallel corpus and obtained satisfactory results. | [
52800448,
17454561,
2132578,
3031527,
6465096,
9558665,
6763915
] | Bilingual Collocation Extraction Based on Syntactic and Statistical Analyses
February 2004
Chien-Cheng Wu
Department of Computer Science, National Tsing Hua University, 101 Kuangfu Road, Hsinchu, Taiwan
Jason S. Chang jschang@cs.nthu.edu.tw
Department of Computer Science, National Tsing Hua University, 101 Kuangfu Road, Hsinchu, Taiwan
Bilingual Collocation Extraction Based on Syntactic and Statistical Analyses
Computational Linguistics and Chinese Language Processing
9(1), February 2004
In this paper, we describe an algorithm that employs syntactic and statistical analysis to extract bilingual collocations from a parallel corpus. Collocations are pervasive in all types of writing and can be found in phrases, chunks, proper names, idioms, and terminology. Therefore, automatic extraction of monolingual and bilingual collocations is important for many applications, including natural language generation, word sense disambiguation, machine translation, lexicography, and cross-language information retrieval. Collocations can be classified as lexical or grammatical collocations. Lexical collocations exist between content words, while a grammatical collocation exists between a content word and function words or a syntactic structure. In addition, bilingual collocations can be rigid or flexible in both languages. In a rigid collocation, the words must appear next to each other; otherwise, the collocation is flexible/elastic. We focus in this paper on extracting rigid lexical bilingual collocations. In our method, the preferred syntactic patterns are obtained from idioms and collocations in a machine-readable dictionary. Collocations matching the patterns are extracted from aligned sentences in a parallel corpus. We use a new alignment method based on punctuation statistics for sentence alignment. The punctuation-based approach is found to outperform the length-based approach, with precision rates approaching 98%. The obtained collocations are subsequently matched up based on cross-linguistic statistical association. Statistical association between whole collocations as well as between the words in collocations is used to link a collocation with its counterpart collocation in the other language. We implemented the proposed method on a very large Chinese-English parallel corpus and obtained satisfactory results.
Introduction
Collocations, like terminology, tend to be lexicalized and to have a somewhat more restricted meaning than the surface forms suggest [Justeson and Katz, 1995]. Collocations are recurrent combinations of words that co-occur more often than they normally would by chance. The words in a collocation may appear next to each other (rigid collocations) or in other locations (flexible/elastic collocations). On the other hand, collocations can also be classified as lexical or grammatical collocations [Benson, Benson, Ilson, 1986]. Lexical collocations exist between content words, while a grammatical collocation exists between a content word and function words or a syntactic structure. Collocations are pervasive in all types of writing and can be found in phrases, chunks, proper names, idioms, and terminology. Collocations in one language are usually difficult to translate directly into another language word for word; therefore, they present a challenge for machine translation systems and second language learners alike.
Automatic extraction of monolingual and bilingual collocations is important for many applications, including natural language generation, word sense disambiguation, machine translation, lexicography, and cross-language information retrieval. Hank and Church [1990] pointed out the usefulness of mutual information for identifying monolingual collocations in lexicography. Justeson and Katz [1995] proposed to identify technical terminology based on preferred linguistic patterns and discourse properties of repetition. Among the many general methods presented by Manning and Schutze [1999], the best results can be achieved through filtering based on both linguistic and statistical constraints. Smadja [1993] presented a method called EXTRACT, based on the mean variance of the distance between two collocates, which is capable of computing elastic collocations. Kupiec [1993] proposed to extract bilingual noun phrases using statistical analysis of the co-occurrence of phrases. Smadja, McKeown, and Hatzivassiloglou [1996] extended the EXTRACT approach to handle bilingual collocations, based mainly on the statistical measure of the Dice coefficient. Dunning [1993] pointed out the weakness of mutual information and showed that log-likelihood ratios are more effective in identifying monolingual collocations, especially when the occurrence count is very low.
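As an illustration of the log-likelihood ratio statistic mentioned above, a sketch computing a Dunning-style G² for a candidate word pair from its 2x2 contingency counts (the counts are invented for the example):

```python
import math

def llr(k11, k12, k21, k22):
    """Dunning-style log-likelihood ratio for a 2x2 contingency table, where
    k11 = c(w1, w2), k12 = c(w1, ~w2), k21 = c(~w1, w2), k22 = c(~w1, ~w2)."""
    def h(*counts):                      # sum of c * log(c / total) over nonzero cells
        total = sum(counts)
        return sum(c * math.log(c / total) for c in counts if c > 0)
    return 2 * (h(k11, k12, k21, k22)
                - h(k11 + k12, k21 + k22)
                - h(k11 + k21, k12 + k22))

print(llr(110, 2442, 111, 29114))        # illustrative counts for a recurrent bigram
```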
Both Smadja and Kupiec used the statistical association between whole collocations in two languages without examining the constituent words. For a collocation and its non-compositional translation equivalent, this approach is reasonable. For instance, with the bilingual collocation ("擠破頭", "stop at nothing") shown in Example 1, it will not be helpful to examine the statistical association between "stopping" and "擠" [ji, squeeze] (or "破" [bo, broken] and "頭" [tou, head] for that matter). However, for the bilingual collocation ("減薪", " pay cut" ) shown in Example 2, considering the statistical association between "pay" and "薪" [xin, wage] as well as between "cut" and "減" [jian, reduce] certainly makes sense. Moreover, we have more data with which to make statistical inferences between words than between phrases. Therefore, measuring the statistical association of collocations based on constituent words will help us cope with the data sparseness problem. We will be able to extract bilingual collocations with high reliability even when they appear together in aligned sentences only once or twice.
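A sketch of the constituent-word association idea using the Dice coefficient over aligned sentence pairs; the three toy pairs below are loosely adapted from Example 2, and the character-level treatment of Chinese is a simplification:

```python
def dice(pairs, src_word, tgt_word):
    """Dice coefficient between a source word and a target word over aligned sentence pairs."""
    both = sum(1 for s, t in pairs if src_word in s and tgt_word in t)
    src = sum(1 for s, _ in pairs if src_word in s)
    tgt = sum(1 for _, t in pairs if tgt_word in t)
    return 2.0 * both / (src + tgt) if src + tgt else 0.0

aligned = [
    ("there were no layoffs or pay cuts".split(), list("不但不虞裁員減薪")),
    ("a pay cut was announced".split(), list("宣布減薪")),
    ("the year-end bonus went out as usual".split(), list("年終獎金照發不誤")),
]
print(dice(aligned, "pay", "薪"))   # 2*2 / (2+2) = 1.0
```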
Example 1
They are stopping at nothing to get their kids into "star schools"
他們擠破頭也要把孩子送進明星小學
Source: 1995/02 No Longer Just an Academic Question: Educational Alternatives Come to Taiwan
Example 2
Not only haven't there been layoffs or pay cuts, the year-end bonus and the performance review bonuses will go out as usual .
不但不虞裁員、減薪,年終獎金、考績獎金還都照發不誤
Source: 1991/01 Filling the Iron Rice Bowl
Since collocations can be rigid or flexible in both languages, there are, in general, three types of bilingual collocation matches. In Example 1, ("擠破頭", "stop at nothing") is a pair of rigid collocations, and ("把…送進", "get … into") is a pair of elastic collocations. In Example 3, ("走…的路線", "take the path of") is an example of a pair matching an elastic collocation with a rigid one.
Example 3
Lin Ku-fang, a worker in ethnomusicology, worries too, but his way is not to take the path of revolutionizing Chinese music or making it more "symphonic"; rather, he goes directly into the tradition, looking into it for "good music" that has lasted undiminished for a hundred generations. 民族音樂工作者林谷芳也非不感到憂心,但他的方法是:不走國樂改革或 「交響化」 的路,而是直接面對傳統、從中尋找歷百代不衰的 「好聽音樂」 。 Source: 1997/05 A Contemporary Connoisseur of the Classical Age--Lin Ku-fang's Canon of Chinese Classical Music
In this paper, we describe an algorithm that employs syntactic and statistical analyses to extract rigid lexical bilingual collocations from a parallel corpus. Here, we focus on bilingual collocations, which have some lexical correlation between them and are rigid in both languages. To cope with the data sparseness problem, we use the statistical association between two collocations as well as that between their constituent words. In Section 2, we describe how we obtain the preferred syntactic patterns from collocations and idioms in a machine-readable dictionary. Examples will be given to show how collocations matching the patterns are extracted and aligned for given aligned sentence pairs in a parallel corpus. We implemented the proposed method in an experiment on the Chinese-English parallel corpus of Sinorama Magazine and obtained satisfactory results. We describe the experiments and our evaluation in section 3. The limitations of the study and related issues are taken up in section 4. We conclude and give future directions of research in section 5.
Extraction of Bilingual Collocations
In this section, we will describe how we obtain bilingual collocations by using preferred syntactic patterns and associative information. Consider a pair of aligned sentences in a parallel corpus such as that shown in Example 4 below:
Example 4
The civil service rice bowl, about which people always said "you can't get filled up, but you won't starve to death either," is getting a new look with the economic downturn. Not only haven't there been layoffs or pay cuts, the year-end bonus and the performance review bonuses will go out as usual, drawing people to compete for their own "iron rice bowl." 以往一向被認為「吃不飽、餓不死」的公家飯,值此經濟景氣低迷之際, 不但不虞裁員、減薪,年終獎金、考績獎金還都照發不誤,因而促使不少 人回頭競逐這隻「鐵飯碗」。 Source: 1991/01 Filling the Iron Rice Bowl
We can extract the following collocations and translation counterparts:
(civil service rice bowl, 公家飯) (get filled up, 吃…飽) (starve to death, 餓…死) (economic downturn, 經濟景氣低迷) (pay cuts, 減薪)
(year-end bonus, 年終獎金) (performance review bonuses, 考績獎金) (iron rice bowl, 鐵飯碗)
In section 2.1, we will first show how that process is carried out for Example 4 using the proposed approach. A formal description of our method will be given in section 2.2.
An Example of Extracting Bilingual Collocations
To extract bilingual collocations, we first run a part of speech tagger on both sentences. For instance, for Example 4, we get the tagging results shown in Examples 4A and 4B.
In the tagged English sentence, we identify phrases that follow a syntactic pattern from a set of training data of collocations. For instance, "jj nn" is one of the preferred syntactic structures. Thus, "civil service," "economic downturn," "own iron," etc. are matched. See Table 1 for more details. For Example 4, the phrases shown in Examples 4C and 4D are considered to be potential candidates for collocations because they match POS patterns that occur for at least two distinct collocations listed in LDOCE:
Example 4A
The/at civil/jj service/nn rice/nn bowl/nn ,/, about/in which/wdt people/nns always/rb said/vbd "/`` you/ppss can/md 't/* get/vb filled/vbn up/rp ,/, but/cc you/ppss will/md 't/* starve/vb to/in death/nn either/cc ,/rb "/'' is/bez getting/vbg a/at new/jj look/nn with/in the/at economic/jj downturn/nn ./. Not/nn only/rb have/hv 't/* there/rb been/ben layoffs/nns or/cc pay/vb cuts/nns ,/, the/at year/nn -/in end/nn bonus/nn and/cc the/at performance/nn review/nn bonuses/nn will/md go/vb out/rp as/ql usual/jj ,/, drawing/vbg people/nns to/to compete/vb for/in their/pp$ own/jj "/`` iron/nn rice/nn bowl/nn ./. "/''
Example 4B
以往/Nd 一向/Dd 被/P02 認為/VE2 「/PU 吃/VC 不/Dc 飽/VH 、/PU 餓不死/VR 」/PU 的/D5 公家/Nc 飯/Na ,/PU 值此/Ne 經濟/Na 景氣/Na 低迷/VH 之際/NG ,/PU 不但/Cb 不虞/VK 裁員/VC 、/PU 減薪/VB ,/PU 年終獎金/Na 、/PU 考績/Na 獎金/Na 還都/Db 照/VC 發/VD 不誤/VH , /PU 因而/Cb 促使/VL 不少/Ne 人/Na 回頭/VA 競逐/VC 這/Ne 隻/Nf 「/PU 鐵飯碗/Na 」/PU
Example 4C
"civil service," "rice bowl," "iron rice bow," "fill up," "economic downturn," "end bonus," "year -end bonus," "go out," "performance review," "performance review bonus," "pay cut," "starve to death," "civil service rice," "service rice," "service rice bowl," "people always," "get fill," "people to compete," "layoff or pay," "new look," "draw people"
Example 4D
"吃不飽," "餓不死," "公家飯," "經濟景氣," "景氣低迷," "經 濟景氣低迷," "裁員," "減薪," "年終獎金," "考績獎金," "競 逐," "鐵飯碗." Although "new look" and "draw people" are legitimate phrases, they are more like "free combinations" than collocations. That is reflected by their low log likelihood ratio values. For this research, we proceed to determine how tightly the two words in overlapping bigrams within a collocation are associated with each other; we calculate the minimum of the log likelihood ratio values for all the bigrams. Then, we filter out the candidates whose POS patterns appear only once or have minimal log likelihood ratios of less than 7.88. See Tables 1 and 2 for more details.
In the tagged Chinese sentence, we basically proceed in the same way to identify the candidates of collocations, based on the preferred linguistic patterns of the Chinese translations of collocations in an English-Chinese MRD. However, since there is no space delimiter between words, it is at times difficult to say whether a translation is a multi-word collocation or a single word, in which case it should not be considered as a collocation. For this reason, we take multiword and singleton phrases (with two or more characters) into consideration. For instance, in tagged Example 4, we extract and consider these candidates shown in Tables 1 and 2 as the counterparts of English collocations.
Note that at this point, we have not pinned collocations down but allow overlapping and conflicting candidates such as "經濟景氣," "景氣低迷," and "經濟景氣低迷." See Tables 3 and 4 for more details. To align collocations in both languages, we employ the Competitive Linking Algorithm proposed by Melamed [1996] to conduct word alignment. Basically, the proposed algorithm CLASS, the Collocation Linking Algorithm based on Syntax and Statistics, is a greedy method that selects collocation pairs. The pair with the highest association value takes precedence over those with lower values. CLASS also imposes a one-to-one constraint on the collocation pairs selected. Therefore, the algorithm at each step considers only pairs with words that have not been selected previously. However, CLASS differs from CLA (the Competitive Linking Algorithm) in that it considers the association between the two candidate collocations based on two measures: the log likelihood ratio between the two collocations in question as a whole, and the translation probability of the collocation based on its constituent words.
Table 1. The initial candidates extracted based on preferred patterns trained on collocations listed in LDOCE (LDOCE example: the example for the POS pattern in LDOCE; Pattern Count: the number of POS patterns occurring in LDOCE; Min LLR: the minimal LLR value of every two words in the candidate pairs.)
In the case of Example 4, the CLASS algorithm first calculates the counts of collocation candidates in the English and Chinese parts of the corpus. The collocation candidates are then matched up across from English to Chinese. Subsequently, the co-occurrence counts of these candidates matched across from English to Chinese are also tallied. From the monolingual collocation candidate counts and cross-language co-occurrence counts, we produce the LLR values and the collocation translation probability derived from word alignment analysis. Those collocation pairs with zero translation probability are ignored. The lists are sorted in descending order of LLR values, and the pairs with low LLR values are discarded. Again, in the case of Example 4, the greedy selection process of collocations starts with the first entry in the sorted list and proceeds as follows:
1. The first, third, and fourth pairs, ("iron rice bowl," "鐵飯碗"), ("year-end bonus," "年終獎金"), and ("economic downturn," "經濟景氣低迷"), are selected first. Conflicting pairs, including the second pair, the fifth pair, and so on, are thereby excluded from consideration.
2. The second entry ("rice bowl," "鐵飯碗"), the fifth entry ("economic downturn," "值此經濟景氣"), and so on conflict with entries that have already been selected. Therefore, CLASS skips over these entries.
3. The entries ("performance review bonus," "考績獎金"), ("civil service rice," "公家 飯"), ("pay cuts," "減薪"), and ("starve to death," "餓不死") are selected next.
4. CLASS proceeds through the rest of the list and the other list without finding any entries that do not conflict with the seven entries previously selected.
5. The program terminates and outputs a list of seven collocations.
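The greedy selection just walked through can be sketched as follows. This is an illustrative simplification rather than the authors' implementation: the candidate pairs are assumed to be already filtered and scored, and the one-to-one constraint is enforced at the word level, as in the walkthrough above; the example values are taken loosely from Table 5.

```python
# Minimal sketch of the greedy, one-to-one selection of bilingual collocation pairs.
# `scored_pairs` holds (llr, english_collocation, chinese_collocation) triples
# that survived filtering.

def greedy_link(scored_pairs):
    selected = []
    used_en, used_ch = set(), set()
    for llr, en, ch in sorted(scored_pairs, key=lambda t: t[0], reverse=True):
        en_words, ch_words = set(en.split()), set(ch.split())
        if en_words & used_en or ch_words & used_ch:
            continue  # shares a word with a higher-scoring pair already selected
        selected.append((en, ch))
        used_en |= en_words
        used_ch |= ch_words
    return selected

pairs = [(103.3, "iron rice bowl", "鐵飯碗"),
         (77.74, "rice bowl", "鐵飯碗"),        # skipped: conflicts with the first pair
         (59.21, "year-end bonus", "年終獎金")]
print(greedy_link(pairs))   # [('iron rice bowl', '鐵飯碗'), ('year-end bonus', '年終獎金')]
```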
The Method
In this section, we describe formally how CLASS works. We assume the availability of a parallel corpus and a list of collocations in a bilingual MRD. We also assume that the sentences and words have been aligned in the parallel corpus. We will describe how CLASS extracts bilingual collocations from such a parallel corpus. CLASS carries out a number of preprocessing steps to calculate the following information:
1. lists of preferred POS patterns of collocation in both languages;
2. collocation candidates matching the preferred POS patterns;
3. n-gram statistics for both languages, N = 1, 2;
4. log likelihood ratio statistics for two consecutive words in both languages;
5. log likelihood ratio statistics for a pair of candidates of bilingual collocations across one language to the other;
6. content word alignment based on the Competitive Linking Algorithm [Melamed, 1997].

Figure 1 illustrates how the method works for each aligned sentence pair (C, E) in the corpus. Initially, part of speech taggers process C and E. After that, collocation candidates are extracted based on preferred POS patterns and the statistical association between consecutive words in a collocation. The collocation candidates are subsequently matched up from one language to the other. These pairs are sorted according to the log likelihood ratio and collocation translation probability. A greedy selection process goes through the sorted list and selects bilingual collocations subject to the one-to-one constraint. The detailed algorithm is given below:
Figure 1. The major components in the proposed CLASS algorithm.
Preprocessing: Extracting preferred POS patterns P and Q in both languages
Input: A list of bilingual collocations from a machine-readable dictionary.
Output: Preferred POS patterns P (in English) and Q (in Chinese).
1. Perform part of speech tagging for both languages.
2. Calculate the number of instances for all POS patterns in both languages.
3. Eliminate the POS patterns with instance counts of 1.
Collocation Linking Alignment based on Syntax and Statistics
Extract bilingual collocations from aligned sentences.
Input:
(1) A pair of aligned sentences (C, E), C = (C 1 C 2 … C n ) and E = (E 1 E 2 … E m ).
(2) Preferred POS patterns P and Q in both languages.
Output: Aligned bilingual collocations in (C, E)
1. C is segmented and tagged with part of speech information T.
2. E is tagged with part of speech sequences S.
3. Match T against P and match S against Q to extract collocation candidates X1, X2, ..., Xk in English and Y1, Y2, ..., Ye in Chinese.
4. Consider each bilingual collocation candidate (Xi, Yj) in turn and calculate the minimal log likelihood ratio MLLR over the consecutive words within it:

MLLR(D) = min_{i = 1 .. n-1} LLR(W_i, W_{i+1}),

where the log-likelihood ratio is

LLR(x; y) = 2 log [ (p1^k1 (1-p1)^(n1-k1) * p2^k2 (1-p2)^(n2-k2)) / (p^k1 (1-p)^(n1-k1) * p^k2 (1-p)^(n2-k2)) ],

with k1 the number of pairs that contain x and y simultaneously, k2 the number of pairs that contain x but not y, n1 the number of pairs that contain y, n2 the number of pairs that do not contain y, and p1 = k1/n1, p2 = k2/n2, p = (k1+k2)/(n1+n2).

5. Eliminate candidates with MLLR smaller than a threshold (7.88).
6. Match up all possible links from English collocation candidates to Chinese ones: (D1, F1), (D1, F2), ..., (Di, Fj), ..., (Dm, Fn).
7. Calculate LLR for each (Di, Fj) and discard pairs with LLR values lower than 7.88.
8. The only candidate list of bilingual collocations considered is the one with non-zero collocation translation probability P(Di, Fj) values. The list is then sorted based on the LLR values and collocation translation probability.
9. Go down the list and select a bilingual collocation if it does not conflict with a previous selection.
10. Output the bilingual collocations selected in Step 9.
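For reference, a direct implementation of the LLR statistic above can look like the following sketch (illustrative only; the toy counts are invented):

```python
import math

def llr(k1, n1, k2, n2):
    """Log-likelihood ratio from the contingency counts defined above:
    k1 pairs contain both x and y (out of n1 pairs containing y);
    k2 pairs contain x but not y (out of n2 pairs not containing y)."""
    p1, p2 = k1 / n1, k2 / n2
    p = (k1 + k2) / (n1 + n2)

    def log_l(k, n, prob):
        # log of the binomial likelihood term prob^k * (1 - prob)^(n - k),
        # clamped away from 0 and 1 to avoid log(0)
        eps = 1e-12
        prob = min(max(prob, eps), 1 - eps)
        return k * math.log(prob) + (n - k) * math.log(1 - prob)

    return 2 * (log_l(k1, n1, p1) + log_l(k2, n2, p2)
                - log_l(k1, n1, p) - log_l(k2, n2, p))

# Toy example: x and y co-occur in 30 of the 50 sentence pairs containing y,
# while x occurs in only 5 of the 950 pairs without y.
print(round(llr(30, 50, 5, 950), 2))
```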
Collocation translation probability P(x | y): the probability of translating collocation y into collocation x, estimated from the word-to-word translation probabilities of the constituent words.
Experiments and Evaluation
We have implemented CLASS using the Longman Dictionary of Contemporary English, English-Chinese Edition, and the parallel corpus of Sinorama magazine. The articles from
Sinorama covered a wide range of topics, reflecting the personalities, places, and events in Taiwan for the previous three decades. We experimented on articles mainly dating from 1995 to 2002. Sentence and word alignment were carried out first to obtain the Sinorama Parallel Corpus.
Sentence alignment is a very important aspect of CLASS. It is the basis for good collocation alignment. We use a new alignment method based on punctuation statistics [Yeh & Chang, 2002]. The punctuation-based approach has been found to outperform the length-based approach with precision rates approaching 98%. With the sentence alignment approach, we obtained approximately 50,000 reliably aligned sentences containing 1,756,000 Chinese words (about 2,534,000 Chinese characters) and 2,420,000 English words in total.
The content words were aligned using the Competitive Linking Algorithm. Alignment of content words resulted in a probabilistic dictionary with 229,000 entries. We evaluated 100 random sentence samples with 926 linking types, and the achieved precision rate was 93.3%. Most of the errors occurred with English words having no counterpart in the corresponding Chinese sentence. Translators do not always translate word for word. For instance, with the word "water" in Example 5, it seems that there is no corresponding pattern in the Chinese sentence. Another major cause of errors was collocations that were not translated compositionally. For instance, the word "State" in the Example 6 is a part of the collocation "United States," and "美國" is more highly associated with "United" than "States"; therefore, due to the one-to-one constraint "States" will not be aligned with "美國". Most often, it will be aligned incorrectly. About 49% of the error links were of this type.
Example 5
The boat is indeed a vessel from the mainland that illegally entered Taiwan waters. The words were a "mark" added by the Taiwan Garrison Command before sending it back.
編按:此船的確是大陸偷渡來台船隻,那八個字只不過是警總在遣 返前給它加的「記號」!
Source: 1990/10 Letters to the Editor

Take "pay" as an example. Table 6 shows the various alignment translations for "pay" and the translation probability. Before running CLASS, we obtained 10,290 English idioms, collocations, and phrases together with 14,945 Chinese translations in LDOCE. After part of speech tagging, we had 1,851 distinct English patterns and 4,326 Chinese patterns. To calculate the statistical association within words in a monolingual collocation and across the bilingual collocations, we built N-grams for the Sinorama Parallel Corpus. There were 790,000 Chinese word bigrams and 669,000 distinct English bigrams. CLASS identified around 595,000 Chinese collocation candidates (184,000 distinct types) and 230,000 English collocation candidates (135,000 distinct types) through this process.
We selected 100 sentences to evaluate the performance. We focused on rigid lexical collocations. The average English sentence had 45.3 words, while the average Chinese sentence had 21.4 words. Two human judges, both master's students majoring in Foreign Languages, identified the bilingual collocations in these sentences. We then compared the bilingual collocations produced by CLASS against the answer keys. The evaluation produced an average recall rate of 60.9% and a precision rate of 85.2% (see Table 7).
Discussion
This paper describes a new approach to the automatic acquisition of bilingual collocations from a parallel corpus. Our method is an extension of Melamed's Competitive Linking Algorithm for word alignment. It combines both linguistic and statistical information and uses it to recognize monolingual and bilingual collocations in a much simpler way than Smadja's work does. Our approach differs from previous work in the following ways:
1. We use a data-driven approach to extract monolingual collocations.
2. Unlike Smadja and Kupiec, we do not commit to two sets of monolingual collocations. Instead, we consider many overlapping and conflicting candidates and rely on cross-linguistic statistics to resolve the issue.
3. We combine both types of information, related to the whole collocation as well as to the constituent words, to achieve more reliable probabilistic estimation of aligned collocations.
Our approach is limited by its reliance on training data consisting of mostly rigid collocation patterns, and it is not applicable to elastic collocations such as "jump on … bandwagon." For instance, the program cannot handle the elastic collocation in the following example:
Example 7
台灣幸而趕搭了一程獲利豐厚的順風車，可以將目前剛要起步的馬來西亞、中國大陸等國家遠拋身後。
Taiwan has had the good fortune to jump on this high-profit bandwagon and has been able to snatch a substantial lead over countries like Malaysia and mainland China, which have just started in this industry.
Source: Sinorama, 1996, Dec Issue Page 22, Stormy Waters for Taiwan's ICs
This limitation can be partially alleviated by matching nonconsecutive word sequences against existing lists of collocations for the two languages.
Another limitation has to do with bilingual collocations, which are not literal translations. For instance, "difficult and intractable" can not yet be handled by the program, because it is not a word for word translation of "桀傲不馴".
Example 8
意思是說一個再怎麼桀傲不馴的人,都會有人有辦法制服他。 This saying means that no matter how difficult and intractable a person may seem, there will always be someone else who can cut him down to size.
Source: 1990/05 A Fierce Horse Ridden by a Fierce Rider
In the experiment, we found that this limitation may be partially solved by splitting the candidate list of bilingual collocations into two lists: one (NZ) with non-zero phrase translation probability values and the other (ZE) with zero values. The two lists can then be sorted based on the LLR values. After extracting bilingual collocations from the NZ list, we could continue to go down the ZE list and select bilingual collocations that do not conflict with previous selections.
In the proposed method, we do not take advantage of the correspondence between POS patterns in one language and those in the other. Some linking mistakes seem to be avoidable if POS information is used. For example, the aligned collocation for "issue/vb visas/nns" is "簽證/Na," not "發/VD 簽證/Na." However, the POS pattern "vb nn" appears to be more compatible with "VD Na" than with "Na."
Example 9
一九七二年澳洲承認中共,中華民國即於此時與澳斷交。因為無正式邦 交,澳洲不能在台灣發簽證,而由澳洲駐香港的使館代辦,然後將簽證送 回台灣,簽證手續約需五天至一周。 The Republic of China broke relations with Australia in 1972, after the country recognized the Chinese Communists, and because of the lack of formal diplomatic relations, Australia felt it could not issue visas on Taiwan. Instead, they were handled through its consulate in Hong Kong and then sent back to Taiwan, the entire process requiring five days to a week to complete.
Source: 1990/04 Visas for Australia to Be Processed in Just 24 Hours

A number of mistakes are caused by erroneous word segmentation by the Chinese tagger. For instance, "大學及研究生出國期間" should be segmented as "大學 / 及 / 研究生 / 出國 / 期間" but instead is segmented as "大學 / 及 / 研究 / 生出 / 國 / 期間 / 的 / 學業." Another major source of segmentation mistakes has to do with proper names and their transliterations. These named entities, when not included in the database, are usually segmented into single Chinese characters. For instance, "...一書作者劉學銚指出..." is segmented as "... / 一 / 書 / 作者 / 劉 / 學 / 銚 / 指出 / ...," while "...在匈牙利地區建國的馬札爾人..." is segmented as "...在 / 匈牙利 / 地區 / 建國 / 的 / 馬 / 札 / 爾 / 人 / ...." Therefore, handling these named entities in a pre-processing step should help avoid segmentation mistakes and alignment difficulties.
Conclusion and Future Work
In this paper, we have presented an algorithm that employs syntactic and statistical analyses to extract rigid bilingual collocations from a parallel corpus. Phrases matching the preferred patterns are extracted from aligned sentences in a parallel corpus. These phrases are subsequently matched up based on cross-linguistic statistical association. Statistical association between the whole collocations as well as words in the collocations is used jointly to link a collocation with its counterpart. We implemented the proposed method on a very large Chinese-English parallel corpus and obtained satisfactory results.
A number of interesting future directions suggest themselves. First, it would be interesting to see how effectively we can extend the method to longer and elastic collocations and to grammatical collocations. Second, bilingual collocations that are proper names and transliterations may need additional consideration. Third, it will be interesting to see if the performance can be improved using cross-language correspondence between POS patterns.
Example 6
Figures issued by the American Immigration Bureau show that most Chinese immigrants had set off from Kwangtung and Hong Kong, which is why the majority of overseas Chinese in the United States to this day are of Cantonese origin.
由美國移民局發表的數字來看，中國移民以從廣東、香港出海者最多，故到現在為止，美國華僑仍以原籍廣東者佔大多數。
Source: 1990/09 All Across the World: The Chinese Global Village

We obtained the word-to-word translation probability from the result of word alignment. The translation probability P(c|e) is calculated as follows:

P(c | e) = count(e, c) / count(e),

where count(e, c) is the number of alignment links between a Chinese word c and an English word e, and count(e) is the number of instances of e in alignment links.
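A small sketch of this estimate (illustrative only; the alignment links below are made up):

```python
from collections import Counter

links = [("pay", "代價"), ("pay", "錢"), ("pay", "代價"), ("water", "水")]  # toy links

link_count = Counter(links)                    # count(e, c)
english_count = Counter(e for e, _ in links)   # count(e)

def p_c_given_e(c, e):
    """P(c|e) = count(e, c) / count(e), zero if e was never linked."""
    return link_count[(e, c)] / english_count[e] if english_count[e] else 0.0

print(p_c_given_e("代價", "pay"))   # 2/3 with the toy links
```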
Table 2. The candidates of English collocations based on both preferred linguistic patterns and log likelihood ratios.

E-collocation Candidate Pairs | Part of Speech | LDOCE example | Pattern Count | Min LLR
civil service | jj nn | hard cash | 1562 | 496.156856
rice bowl | nn nn | beef steak | 1860 | 99.2231161
iron rice bowl | nn nn nn | tin pan alley | 8 | 66.3654678
filled up | vbn rp | set down | 84 | 55.2837871
economic downturn | jj nn | hard cash | 1562 | 51.8600979
*end bonus | nn nn | beef steak | 1860 | 15.9977283
year-end bonus | nn nn nn | tin pan alley | 12 | 15.9977283
go out | vb rp | bang out | 1790 | 14.6464925
performance review | nn nn | beef steak | 1860 | 13.5716459
performance review bonus | nn nn nn | tin pan alley | 8 | 13.5716459
pay cut | vb nn | take action | 313 | 8.53341082
starve to death | vb to nn | bring to bay | 26 | 7.93262494
civil service rice | jj nn nn | high water mark | 19 | 7.88517791
*service rice | nn nn | beef steak | 1860 | 7.88517791
Table 3. The initial candidates extracted by the Chinese collocation recognizer.

C-collocation Candidate Pairs | POS | LDOCE example | Pattern Count | Min LLR
不少 人 | Ed Na | 本國語 | 2 | 550.904793
*被 認為 | PP VE | 待考慮 | 6 | 246.823964
景氣 低迷 | Na VH | 視力不良 | 97 | 79.8159904
經濟 景氣 低迷 | Na Na VH | 宗教信仰自由 | 3 | 47.2912274
經濟 景氣 | Na Na | 生活津貼 | 429 | 47.2912274
公家 飯 | Nc Na | 全國大選 | 63 | 42.6614685
*不 飽 | Dc VH | 毫無困難 | 24 | 37.3489687
考績 獎金 | Na Na | 生活津貼 | 429 | 36.8090448
不虞 裁員 | VJ VA | 引起爭吵 | 3 | 17.568518
回頭 競逐 | VA VC | 豎耳傾聽 | 26 | 14.7120606
*還都 照 | Db VC | 無法參與 | 18 | 14.1291893
*發 不誤 | VD VH | 供應充份 | 2 | 13.8418648
*低迷 之際 | VH NG | 兩可之間 | 10 | 11.9225789
*值此 經濟 景氣 | VA Na Na | 浮球活栓 | 2 | 9.01342071
*值此 經濟 | VA Na | 劃線支票 | 94 | 9.01342071
*照 發 | VC VD | 登記歸還 | 2 | 6.12848087
*人 回頭 | Na VA | 安危未卜 | 27 | 1.89617179

* indicates an invalid candidate (based on human judgment)
Table 4. The result of Chinese collocation candidates which are picked out. (The ones which have no Min LLR are singleton phrases.)

C-collocation Candidate Pairs | POS | LDOCE example | Pattern Count | Min LLR
不少 人 | Ed Na | 本國語 | 2 | 550.904793
*被 認為 | PP VE | 待考慮 | 6 | 246.823964
景氣 低迷 | Na VH | 視力不良 | 97 | 79.8159904
經濟 景氣 低迷 | Na Na VH | 宗教信仰自由 | 3 | 47.2912274
經濟 景氣 | Na Na | 生活津貼 | 429 | 47.2912274
公家 飯 | Nc Na | 全國大選 | 63 | 42.6614685
*不 飽 | Dc VH | 毫無困難 | 24 | 37.3489687
考績 獎金 | Na Na | 生活津貼 | 429 | 36.8090448
不虞 裁員 | VJ VA | 引起爭吵 | 3 | 17.568518
Table 5. The extracted Chinese collocation candidates which are picked out. The shaded collocation pairs are selected by CLASS (Greedy Alignment Linking E).

English collocations | Chinese collocations | LLR | Collocation Translation Prob.
iron rice bowl | 鐵飯碗 | 103.3 | 0.0202
rice bowl | 鐵飯碗 | 77.74 | 0.0384
year-end bonus | 年終獎金 | 59.21 | 0.0700
economic downturn | 經濟 景氣 低迷 | 32.4 | 0.9359
economic downturn | 值此 經濟 景氣 | 32.4 | 0.4359
... | ... | ... | ...
performance review bonus | 考績 獎金 | 30.32 | 0.1374
economic downturn | 景氣 低迷 | 29.82 | 0.2500
civil service rice | 公家 飯 | 29.08 | 0.0378
Table 6. The aligned translations for the English word "pay" and their translation probability.

Translation | Count | Translation Prob.
代價 | 34 | 0.1214
錢 | 31 | 0.1107
費用 | 21 | 0.075
付費 | 16 | 0.0571
領 | 16 | 0.0571
繳 | 16 | 0.0571
支付 | 13 | 0.0464
給 | 13 | 0.0464
薪水 | 11 | 0.0393
負擔 | 9 | 0.0321
費 | 9 | 0.0321
給付 | 8 | 0.0286
花錢 | 7 | 0.025
出錢 | 6 | 0.0214
租 | 6 | 0.0214
發給 | 6 | 0.0214
付出 | 5 | 0.0179
薪資 | 5 | 0.0179
付錢 | 4 | 0.0143
加薪 | 4 | 0.0143
... | ... | ...
積欠 | 2 | 0.0071
繳款 | 2 | 0.0071
Table 7. Experiment results of bilingual collocation from the Sinorama Parallel Corpus.

#keys | #answers | #hits | #errors | Recall | Precision
382 | 273 | 233 | 40 | 60.9% | 85.2%
References

Benson, Morton, Evelyn Benson, and Robert Ilson. "The BBI Combinatory Dictionary of English: A Guide to Word Combinations." John Benjamins, Amsterdam, Netherlands, 1986.
Choueka, Y. "Looking for needles in a haystack." RIAO, Conference on User-Oriented Context Based Text and Image Handling, Cambridge, 1988, pp. 609-623.
Choueka, Y., Klein, and Neuwitz, E. "Automatic retrieval of frequent idiomatic and collocational expressions in a large corpus." Journal of the Association for Literary and Linguistic Computing, 4(1), 1983, pp. 34-38.
Church, K. W. and Hanks, P. "Word association norms, mutual information, and lexicography." Computational Linguistics, 16(1), 1990, pp. 22-29.
Dagan, I. and K. Church. "Termight: Identifying and translating technical terminology." In Proc. of the 4th Conference on Applied Natural Language Processing (ANLP), 1994, pp. 34-40.
Dunning, T. "Accurate methods for the statistics of surprise and coincidence." Computational Linguistics, 19(1), 1993, pp. 61-75.
Haruno, M., S. Ikehara, and T. Yamazaki. "Learning bilingual collocations by word-level sorting." In Proc. of the 16th International Conference on Computational Linguistics (COLING '96), 1996, pp. 525-530.
Huang, C.-R., K.-J. Chen, and Y.-Y. Yang. "Character-based Collocation for Mandarin Chinese." In Proc. of the 38th Annual Meeting of the Association for Computational Linguistics, 2000, pp. 540-543.
Inkpen, Diana Zaiu and Hirst, Graeme. "Acquiring collocations for lexical choice between near-synonyms." In Proceedings of the Workshop on Unsupervised Lexical Acquisition, 40th Annual Meeting of the Association for Computational Linguistics (ACL 2002), 2002, pp. 67-76.
Justeson, J.S. and Slava M. Katz. "Technical Terminology: some linguistic properties and an algorithm for identification in text." Natural Language Engineering, 1(1), 1995, pp. 9-27.
Kupiec, Julian. "An algorithm for finding noun phrase correspondences in bilingual corpora." In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics, 1993, pp. 17-22.
Lin, D. "Using collocation statistics in information extraction." In Proc. of the Seventh Message Understanding Conference (MUC-7), 1998.
Manning, C. and H. Schutze. "Foundations of Statistical Natural Language Processing." MIT Press, 1999.
Melamed, I. Dan. "A Word-to-Word Model of Translational Equivalence." In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics, 1997, pp. 490-497.
Smadja, F. "Retrieving collocations from text: Xtract." Computational Linguistics, 19(1), 1993, pp. 143-177.
Smadja, F., K.R. McKeown, and V. Hatzivassiloglou. "Translating collocations for bilingual lexicons: A statistical approach." Computational Linguistics, 22(1), 1996, pp. 1-38.
Yeh. "Using Punctuation Marks for Bilingual Sentence Alignment." Master thesis, National Tsing Hua University, Taiwan, 2003.
1,515,014 | Contextual Phrase-Level Polarity Analysis using Lexical Affect Scoring and Syntactic N-grams | We present a classifier to predict contextual polarity of subjective phrases in a sentence. Our approach features lexical scoring derived from the Dictionary of Affect in Language (DAL) and extended through WordNet, allowing us to automatically score the vast majority of words in our input avoiding the need for manual labeling. We augment lexical scoring with n-gram analysis to capture the effect of context. We combine DAL scores with syntactic constituents and then extract ngrams of constituents from all sentences. We also use the polarity of all syntactic constituents within the sentence as features. Our results show significant improvement over a majority class baseline as well as a more difficult baseline consisting of lexical n-grams. | [
10807721,
976653,
8162001,
6627923,
6541910
] | Contextual Phrase-Level Polarity Analysis using Lexical Affect Scoring and Syntactic N-grams
Apoorv Agarwal
Department of Computer Science
Columbia University New York
USA
Fadi Biadsy
Department of Computer Science
Columbia University New York
USA
Kathleen R Mckeown
Columbia University New York
USA
Contextual Phrase-Level Polarity Analysis using Lexical Affect Scoring and Syntactic N-grams
March - 3 April 2009. © 2009 Association for Computational Linguistics
We present a classifier to predict contextual polarity of subjective phrases in a sentence. Our approach features lexical scoring derived from the Dictionary of Affect in Language (DAL) and extended through WordNet, allowing us to automatically score the vast majority of words in our input avoiding the need for manual labeling. We augment lexical scoring with n-gram analysis to capture the effect of context. We combine DAL scores with syntactic constituents and then extract ngrams of constituents from all sentences. We also use the polarity of all syntactic constituents within the sentence as features. Our results show significant improvement over a majority class baseline as well as a more difficult baseline consisting of lexical n-grams.
Introduction
Sentiment analysis is a much-researched area that deals with identification of positive, negative and neutral opinions in text. The task has evolved from document level analysis to sentence and phrasal level analysis. Whereas the former is suitable for classifying news (e.g., editorials vs. reports) into positive and negative, the latter is essential for question-answering and recommendation systems. A recommendation system, for example, must be able to recommend restaurants (or movies, books, etc.) based on a variety of features such as food, service or ambience. Any single review sentence may contain both positive and negative opinions, evaluating different features of a restaurant. Consider the following sentence (1) where the writer expresses opposing sentiments towards food and service of a restaurant. In tasks such as this, therefore, it is important that sentiment analysis be done at the phrase level.
(1) The Taj has great food but I found their service to be lacking.
Subjective phrases in a sentence are carriers of sentiments in which an experiencer expresses an attitude, often towards a target. These subjective phrases may express neutral or polar attitudes depending on the context of the sentence in which they appear. Context is mainly determined by content and structure of the sentence. For example, in the following sentence (2), the underlined subjective phrase seems to be negative, but in the larger context of the sentence, it is positive. 1
(2) The robber entered the store but his efforts were crushed when the police arrived on time.
Our task is to predict contextual polarity of subjective phrases in a sentence. A traditional approach to this problem is to use a prior polarity lexicon of words to first set priors on target phrases and then make use of the syntactic and semantic information in and around the sentence to make the final prediction. As in earlier approaches, we also use a lexicon to set priors, but we explore new uses of a Dictionary of Affect in Language (DAL) (Whissel, 1989) extended using WordNet (Fellbaum, 1998). We augment this approach with n-gram analysis to capture the effect of context. We present a system for classification of neutral versus positive versus negative and positive versus negative polarity (as is also done by ). Our approach is novel in the use of following features:
• Lexical scores derived from DAL and extended through WordNet: The Dictionary of Affect has been widely used to aid in interpretation of emotion in speech (Hirschberg et al., 2005). It contains numeric scores assigned along axes of pleasantness, activeness and concreteness. We introduce a method for setting numerical priors on words using these three axes, which we refer to as a "scoring scheme" throughout the paper. This scheme has high coverage of the phrases for classification and requires no manual intervention when tagging words with prior polarities.
• N-gram Analysis: exploiting automatically derived polarity of syntactic constituents We compute polarity for each syntactic constituent in the input phrase using lexical affect scores for its words and extract n-grams over these constituents. N-grams of syntactic constituents tagged with polarity provide patterns that improve prediction of polarity for the subjective phrase.
• Polarity of Surrounding Constituents: We use the computed polarity of syntactic constituents surrounding the phrase we want to classify. These features help to capture the effect of context on the polarity of the subjective phrase.
We show that classification of subjective phrases using our approach yields better accuracy than two baselines, a majority class baseline and a more difficult baseline of lexical n-gram features.
We also provide an analysis of how the different component DAL scores contribute to our results through the introduction of a "norm" that combines the component scores, separating polar words that are less subjective (e.g., Christmas , murder) from neutral words that are more subjective (e.g., most, lack).
Section 2 presents an overview of previous work, focusing on phrasal level sentiment analysis. Section 3 describes the corpus and the gold standard we used for our experiments. In section 4, we give a brief description of DAL, discussing its utility and previous uses for emotion and for sentiment analysis. Section 5 presents, in detail, our polarity classification framework. Here we describe our scoring scheme and the features we extract from sentences for classification tasks. Experimental set-up and results are presented in Section 6. We conclude with Section 7 where we also look at future directions for this research.
Literature Survey
The task of sentiment analysis has evolved from document level analysis (e.g., (Turney., 2002); (Pang and Lee, 2004)) to sentence level analysis (e.g., (Hu and Liu., 2004); (Kim and Hovy., 2004); (Yu and Hatzivassiloglou, 2003)). These researchers first set priors on words using a prior polarity lexicon. When classifying sentiment at the sentence level, other types of clues are also used, including averaging of word polarities or models for learning sentence sentiment.
Research on contextual phrasal level sentiment analysis was pioneered by work that used manually developed patterns to identify sentiment. That approach had high precision, but low recall. Later work also explores contextual phrasal level sentiment analysis, using a machine learning approach that is closer to the one we present. Both lines of work follow the traditional approach and first set priors on words using a prior polarity lexicon. One uses a lexicon of over 8000 subjectivity clues, gathered from three sources ((Riloff and Wiebe, 2003); (Hatzivassiloglou and McKeown, 1997) and The General Inquirer 2). Words that were not tagged as positive or negative were manually labeled. The other acquired words from GI, DAL and WordNet. From DAL, only words whose pleasantness score is one standard deviation away from the mean were used. Nasukawa, as well as other researchers (Kamps and Marx, 2002), also manually tag words with prior polarities. All of these researchers use categorical tags for prior lexical polarity; in contrast, we use quantitative scores, making it possible to use them in computation of scores for the full phrase.
While they aim at phrasal level analysis, their system actually only gives "each clue instance its own label" [p. 350]. Their gold standard is also at the clue level and assigns a value based on the clue's appearance in different expressions (e.g., if a clue appears in a mixture of negative and neutral expressions, its class is negative). They note that they do not determine subjective expression boundaries and for this reason, they classify at the word level. This approach is quite different from ours, as we compute the polarity of the full phrase. The average length of the subjective phrases in the corpus was 2.7 words, with a standard deviation of 2.3. Like them, we do not attempt to determine the boundary of subjective expressions; we use the labeled boundaries in the corpus.
Corpus
We used the Multi-Perspective Question-Answering (MPQA version 1.2) Opinion corpus for our experiments. We extracted a total of 17,243 subjective phrases annotated for contextual polarity from the corpus of 535 documents (11,114 sentences). These subjective phrases are either "direct subjective" or "expressive subjective". "Direct subjective" expressions are explicit mentions of a private state (Quirk et al., 1985) and are much easier to classify. "Expressive subjective" phrases are indirect or implicit mentions of private states and therefore are harder to classify. Approximately one third of the phrases we extracted were direct subjective with non-neutral expressive intensity whereas the rest of the phrases were expressive subjective. In terms of polarity, there were 2779 positive, 6471 negative and 7993 neutral expressions. Our Gold Standard is the manual annotation tag given to phrases in the corpus.
DAL
DAL is an English language dictionary built to measure emotional meaning of texts. The samples employed to build the dictionary were gathered from different sources such as interviews, adolescents' descriptions of their emotions and university students' essays. Thus, the 8742 word dictionary is broad and avoids bias from any one particular source. Each word is given three kinds of scores (pleasantness -also called evaluation, ee, activeness, aa and imagery, ii) on a scale of 1 (low) to 3 (high). Pleasantness is a measure of polarity. For example, in Table 1, affection is given a pleasantness score of 2.77 which is closer to 3.0 and is thus a highly positive word. Likewise, activeness is a measure of the activation or arousal level of a word, which is apparent from the activeness scores of slug and energetic in the table. The third score, imagery, is a measure of the ease with which a word forms a mental picture. For example, affect cannot be imagined easily and therefore has a score closer to 1, as opposed to flower which is a very concrete and therefore has an imagery score of 3.
A notable feature of the dictionary is that it has different scores for various inflectional forms of a word (affect and affection); thus, morphological parsing, and the possibility of resulting errors, is avoided. Moreover, Cowie et al. (2001) showed that the three scores are uncorrelated; this implies that each of the three scores provides complementary information. The dictionary has previously been used for detecting deceptive speech (Hirschberg et al., 2005) and recognizing emotion in speech (Athanaselis et al., 2006).
The Polarity Classification Framework
In this section, we present our polarity classification framework. The system takes a sentence marked with a subjective phrase and identifies the most likely contextual polarity of this phrase. We use a logistic regression classifier, implemented in Weka, to perform two types of classification: Three way (positive, negative, vs. neutral) and binary (positive vs. negative). The features we use for classification can be broadly divided into three categories: I. Prior polarity features computed from DAL and augmented using WordNet (Section 5.1). II. lexical features including POS and word n-gram features (Section 5.3), and III. the combination of DAL scores and syntactic features to allow both n-gram analysis and polarity features of neighbors (Section 5.4).
Scoring based on DAL and WordNet
DAL is used to assign three prior polarity scores to each word in a sentence. If a word is found in DAL, scores of pleasantness (ee), activeness (aa), and imagery (ii) are assigned to it. Otherwise, a list of the word's synonyms and antonyms is created using WordNet. This list is sequentially traversed until a match is found in DAL or the list ends, in which case no scores are assigned. For example, astounded, a word absent in DAL, was scored by using its synonym amazed. Similarly, in-humane was scored using the reverse polarity of its antonym humane, present in DAL. These scores are Z-Normalized using the mean and standard deviation measures given in the dictionary's manual (Whissel, 1989). It should be noted that in our current implementation all function words are given zero scores since they typically do not demonstrate any polarity. The next step is to boost these normalized scores depending on how far they lie from the mean. The reason for doing this is to be able to differentiate between phrases like "fairly decent advice" and "excellent advice". Without boosting, the pleasantness scores of both phrases are almost the same. To boost the score, we multiply it by the number of standard deviations it lies from the mean.
After the assignment of scores to individual words, we handle local negations in a sentence by using a simple finite state machine with two states: RETAIN and INVERT. In the INVERT state, the sign of the pleasantness score of the current word is inverted, while in the RETAIN state the sign of the score stays the same. Initially, the first word in a given sentence is fed to the RETAIN state. When a negation (e.g., not, no, never, cannot, didn't) is encountered, the state changes to the INVERT state. While in the INVERT state, if 'but' is encountered, it switches back to the RETAIN state. In this machine we also take care of "not only," which serves as an intensifier rather than a negation. To handle phrases like "no better than evil" and "could not be clearer," we also switch states from INVERT to RETAIN when a comparative degree adjective is found after 'not'. For example, the words in the phrase in Table 2 are given positive pleasantness scores and are thus labeled with positive prior polarity. We observed that roughly 74% of the content words in the corpus were directly found in DAL. Synonyms of around 22% of the words in the corpus were found to exist in DAL. Antonyms of only 1% of the words in the corpus were found in DAL. Our system failed to find prior semantic orientations of roughly 3% of the total words in the corpus. These were rarely occurring words like apartheid, apocalyptic and ulterior. We assigned zero scores for these words.
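The scoring scheme can be sketched roughly as below. This is an illustrative sketch rather than the authors' code: the DAL entries, means, and standard deviations are placeholders, the boosting step is one reading of "multiply by the number of standard deviations from the mean," and the finite state machine omits the "not only" intensifier case.

```python
from nltk.corpus import wordnet as wn  # requires the NLTK WordNet data

# Placeholder DAL lookup (word -> raw (ee, aa, ii) on the 1-3 scale) and
# per-dimension mean/std from the DAL manual; all numbers here are invented.
DAL = {"amazed": (2.4, 2.2, 1.9), "humane": (2.6, 1.8, 1.5)}
MEAN = (1.85, 1.67, 1.52)
STD = (0.36, 0.36, 0.63)

def z_boosted(raw):
    """Z-normalize each dimension, then boost it by the number of standard
    deviations it lies from the mean (signed z-score times its magnitude)."""
    return tuple(((x - m) / s) * abs((x - m) / s) for x, m, s in zip(raw, MEAN, STD))

def dal_scores(word):
    """Look the word up in DAL; fall back to WordNet synonyms, then antonyms
    (flipping the pleasantness sign); unseen words get zero scores."""
    if word in DAL:
        return z_boosted(DAL[word])
    synonyms, antonyms = [], []
    for syn in wn.synsets(word):
        for lemma in syn.lemmas():
            synonyms.append(lemma.name())
            antonyms.extend(a.name() for a in lemma.antonyms())
    for s in synonyms:
        if s in DAL:
            return z_boosted(DAL[s])
    for a in antonyms:
        if a in DAL:
            ee, aa, ii = z_boosted(DAL[a])
            return (-ee, aa, ii)
    return (0.0, 0.0, 0.0)

NEGATIONS = {"not", "no", "never", "cannot", "didn't", "n't"}

def signed_pleasantness(words, pos_tags):
    """RETAIN/INVERT pass: flip pleasantness signs after a negation until 'but'
    (or a comparative adjective following the negation) switches back."""
    invert = False
    scored = []
    for w, pos in zip(words, pos_tags):
        if w.lower() in NEGATIONS:
            invert = True
        elif w.lower() == "but" or (invert and pos == "JJR"):
            invert = False
        ee, aa, ii = dal_scores(w.lower())
        scored.append((-ee if invert else ee, aa, ii))
    return scored

print(dal_scores("astounded"))  # scored via its WordNet synonym "amazed"
```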
In our system, we assign three DAL scores, using the above scheme, for the subjective phrase in a given sentence. The features are (1) µ ee , the mean of the pleasantness scores of the words in the phrase, (2) µ aa , the mean of the activeness scores of the words in the phrase, and similarly (3) µ ii , the mean of the imagery scores.
Norm
We gave each phrase another score, which we call the norm, that is a combination of the three scores from DAL. Cowie et al. (2001) suggest a mechanism of mapping emotional states to a 2-D continuous space using an Activation-Evaluation space (AE) representation. This representation makes use of the pleasantness and activeness scores from DAL and divides the space into four quadrants: "delightful", "angry", "serene", and "depressed". Whissel (2008) observes that tragedies, which are easily imaginable in general, have higher imagery scores than comedies. Drawing on these approaches and our intuition that neutral expressions tend to be more subjective, we define the norm in the following equation (1).
norm = sqrt(ee^2 + aa^2) / ii    (1)
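A direct reading of equation (1) as a small sketch; the scores are assumed to be on scales where imagery is strictly positive (e.g., the raw 1-3 DAL scale), and the example values are invented.

```python
import math

def norm(ee, aa, ii):
    """Combine the AE-space magnitude with imagery: a more imageable (less
    subjective) word gets a smaller norm for the same pleasantness/activeness."""
    return math.sqrt(ee ** 2 + aa ** 2) / ii

# Two words with the same AE-space score but different imagery: the more
# imageable one (larger ii) receives the lower norm.
print(norm(2.0, 1.5, 1.2), norm(2.0, 1.5, 2.8))
```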
Words of interest to us may fall into the following four broad categories:
1. High AE score and high imagery: These are words that are highly polar and less subjective (e.g., angel and lively).
2. Low AE score and low imagery: These are highly subjective neutral words (e.g., generally and ordinary).
3. High AE score and low imagery: These are words that are both highly polar and subjective (e.g., succeed and good).
4. Low AE score and high imagery: These are words that are neutral and easily imaginable (e.g., car and door).
It is important to differentiate between these categories of words, because highly subjective words may change orientation depending on context; less subjective words tend to retain their prior orientation. For instance, in example sentence (3) below, the underlined phrase seems negative, but in the context it is positive. Since a subjective word like succeed depends on "what" one succeeds in, it may change its polarity accordingly. In contrast, less subjective words, like angel, do not depend on the context in which they are used; they evoke the same connotation as their prior polarity.
(3) They haven't succeeded and will never succeed in breaking the will of this valiant people.
As another example, AE space scores of goodies and good turn out to be the same. What differentiates one from the another is the imagery score, which is higher for the former. Therefore, value of the norm is lower for goodies than for good. Unsurprisingly, this feature always appears in the top 10 features when the classification task contains neutral expressions as one of the classes.
Lexical Features
We extract two types of lexical features, part of speech (POS) tags and n-gram word features. We count the number of occurrences of each POS in the subjective phrase and represent each POS as an integer in our feature vector. 3 For each subjective phrase, we also extract a subset of unigram, bigrams, and trigrams of words (selected automatically, see Section 6). We represent each n-gram feature as a binary feature. These types of features were used to approximate standard n-gram language modeling (LM). In fact, we did experiment with a standard trigram LM, but found that it did not improve performance. In particular, we trained two LMs, one on the polar subjective phrases and another on the neutral subjective phrases. Given a sentence, we computed two perplexities of the two LMs on the subjective phrase in the sentence and added them as features in our feature vectors. This procedure provided us with significant improvement over a chance baseline but did not outperform our current system. We speculate that this was caused by the split of training data into two parts, one for training the LMs and another for training the classifier. The resulting small quantity of training data may be the reason for bad performance. Therefore, we decided to back off to only binary n-gram features as part of our feature vector.
Syntactic Features
In this section, we show how we can combine the DAL scores with syntactic constituents. This process involves two steps. First, we chunk each sentence into its syntactic constituents (NP, VP, PP, JJP, and Other) using a CRF Chunker. 4 If the marked-up subjective phrase does not contain complete chunks (i.e., it partially overlaps with other chunks), we expand the subjective phrase to include the chunks that it overlaps with. We term this expanded phrase the target phrase; see Figure 1.
Second, each chunk in a sentence is then assigned a 2-D AE space score as defined by Cowie et al. (2001) by adding the individual AE space scores of all the words in the chunk and then normalizing by the number of words. At this point, we are only concerned with the polarity of the chunk (i.e., whether it is positive, negative or neutral) and imagery will not help in this task; the AE space score is determined from pleasantness and activeness alone. A threshold, determined empirically by analyzing the distributions of positive (pos), negative (neg) and neutral (neu) expressions, is used to define ranges for these classes of expressions. This enables us to assign each chunk a prior semantic polarity. Having the semantic orientation (positive, negative, neutral) and phrasal tags, the sentence is then converted to a sequence of encodings [Phrasal-Tag]_polarity. We mark each phrase that we want to classify as a "target" to differentiate it from the other chunks and attach its encoding. As mentioned, if the target phrase partially overlaps with chunks, it is simply expanded to subsume the chunks. This encoding is illustrated in Figure 1.
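As a rough sketch of this chunk-scoring step (not the authors' code): the threshold value and the exact mapping from the AE plane to pos/neg/neu below are assumptions made for illustration.

```python
import math

THRESHOLD = 0.5   # hypothetical cutoff separating polar from neutral chunks

def chunk_polarity(word_scores):
    """word_scores: list of (ee, aa) pairs for the words in one chunk.
    Average the word scores, then use the AE-plane magnitude and the sign of
    pleasantness to decide pos/neg/neu."""
    if not word_scores:
        return "neu"
    ee = sum(s[0] for s in word_scores) / len(word_scores)
    aa = sum(s[1] for s in word_scores) / len(word_scores)
    if math.sqrt(ee ** 2 + aa ** 2) < THRESHOLD:
        return "neu"
    return "pos" if ee > 0 else "neg"

def encode(chunks):
    """chunks: list of (syntactic_tag, [(ee, aa), ...]); yields e.g. '[NP]_neg'."""
    return ["[%s]_%s" % (tag, chunk_polarity(scores)) for tag, scores in chunks]

print(encode([("NP", [(1.2, 0.8)]), ("VP", [(-0.9, 1.1), (0.0, 0.0)])]))
```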
After these two steps, we extract a set of features that are used in classifying the target phrase. These include n-grams of chunks from all the sentences, minimum and maximum pleasantness scores from the chunks in the target phrase itself, and the syntactic categories that occur in the context of the target phrase. In the remainder of this section, we describe how these features are extracted.
We extract unigrams, bigrams and trigrams of chunks from all the sentences. For example, we may extract a bigram of adjacent chunk encodings from Figure 1 (Figure 1: Converting a sentence with a subjective phrase to a sequence of chunks with their types and polarities). For these chunk n-grams, for the sentence containing the target phrase, we add binary values in our feature vector such that the value is 1 if the sentence contains that chunk n-gram. We also include two features related to the target phrase. The target phrase often consists of many chunks. To detect if a chunk of the target phrase is highly polar, minimum and maximum pleasantness scores over all the chunks in the target phrase are noted.
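A small sketch of the binary chunk n-gram features just described; the selected n-gram vocabulary here is invented for illustration (in the system it comes from automatic feature selection, Section 6), and the encoded chunk sequence is assumed to come from a step like the `encode` sketch above.

```python
# Invented vocabulary of chunk n-grams retained after feature selection.
SELECTED_NGRAMS = sorted({("[NP]_neu", "[VP]_neu"),
                          ("[Other]_target_neu", "[NP]_neg")})

def chunk_ngram_features(encoded_chunks, n_max=3):
    """Return one binary feature per selected n-gram: 1 if the sentence's chunk
    sequence contains it, else 0."""
    present = set()
    for n in range(1, n_max + 1):
        for i in range(len(encoded_chunks) - n + 1):
            present.add(tuple(encoded_chunks[i:i + n]))
    return [1 if g in present else 0 for g in SELECTED_NGRAMS]

print(chunk_ngram_features(["[Other]_target_neu", "[NP]_neg", "[VP]_neu"]))  # [0, 1]
```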
In addition, we add features which attempt to capture contextual information using the prior semantic polarity assigned to each chunk both within the target phrase itself and within the context of the target phrase. In cases where the target phrase is in the beginning of the sentence or at the end, we simply assign zero scores. Then we compute the frequency of each syntactic type (i.e., NP, VP, PP, JJP) and polarity (i.e., positive, negative, neutral) to the left of the target, to the right of the target and for the target. This additional set of contextual features yields 36 features in total: three polarities: {positive, negative, neutral} * three contexts: {left, target, right} * four chunk syntactic types: {NP, VP, PP, JJP}.
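The 3 x 3 x 4 = 36 contextual counts can be assembled as in this sketch (not the authors' code); the chunk representation follows the earlier sketches and the target span indices are assumed to be known from the corpus annotation.

```python
from collections import Counter

TYPES = ["NP", "VP", "PP", "JJP"]
POLARITIES = ["pos", "neg", "neu"]

def contextual_features(chunks, target_span):
    """chunks: list of (syntactic_tag, polarity) for the whole sentence;
    target_span: (start, end) chunk indices of the target phrase, end exclusive.
    Returns the 36 frequency features (context x chunk type x polarity)."""
    start, end = target_span
    counts = Counter()
    for i, (tag, pol) in enumerate(chunks):
        if tag not in TYPES:
            continue                      # 'Other' chunks are not counted here
        context = "target" if start <= i < end else ("left" if i < start else "right")
        counts[(context, tag, pol)] += 1
    return [counts[(c, t, p)]
            for c in ("left", "target", "right")
            for t in TYPES
            for p in POLARITIES]
```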
The full set of features captures different types of information. N-grams look for certain patterns that may be specific to either polar or neutral sentiments. Minimum and maximum scores capture information about the target phrase standalone. The last set of features incorporates information about the neighbors of the target phrase. We performed feature selection on this full set of n-gram related features; thus, a small subset of these n-gram related features, selected automatically (see Section 6), was used in the experiments.
Experiments and Results
Subjective phrases from the MPQA corpus were used in 10-fold cross-validation experiments. The MPQA corpus includes gold standard tags for each phrase. A logistic classifier was used for two polarity classification tasks, positive versus negative versus neutral and positive versus negative. We report accuracy, and F-measure for both balanced and unbalanced data. Table 3 shows results for a 3-way classifier. For the balanced data-set, each class has 2799 instances and hence the chance baseline is 33%. For the unbalanced data-set, there are 2799 instances of positive, 6471 instances of negative and 7993 instances of neutral phrases and thus the baseline is about 46%. Results show that the accuracy increases as more features are added. It may be seen from the table that prior polarity scores do not do well alone, but when used in conjunction with other features they play an important role in achieving an accuracy much higher than both baselines (chance and lexical n-grams). To re- confirm if prior polarity scores add value, we experimented by using all features except the prior polarity scores and noticed a drop in accuracy by about 4%. This was found to be true for the other classification task as well. The neg . We thus learned n-gram patterns that are characteristic of neutral expressions (the just mentioned bigram and the first of the unigrams) as well as a pattern found mostly in negative expressions (the latter unigram). It was surprising to find another top chunk feature, the bigram "[Other] target neu [N P ] neg " (i.e., a neutral chunk of syntactic type "Other" preceding a negative noun phrase), present in neutral expressions six times more than in polar expressions. An instance where these chunk features could have been responsible for the correct prediction of a target phrase is shown in Figure 2. Figure 2(a) shows an example sentence from the MPQA corpus, which has three annotated subjective phrases. The manually labeled polarity of phrases (A) and (C) is negative and that of (B) is neutral. Figure 2(b) shows the relevant chunk bigram which is used to predict the contextual polarity of the target phrase (B).
Positive versus Negative versus Neutral
It was interesting to see that the top 10 features included features from all categories (i.e., prior DAL scores, lexical n-grams and POS, and syntactic features). In this and the other experiment, pleasantness, activation and the norm were among the top 5 features. We ran a significance test to show the importance of the norm feature in our classification task and observed that it yielded a significant increase in accuracy (2.26%, p-value = 1.45e-5). Table 4 shows results for positive versus negative classification. We show results for both balanced and unbalanced data-sets. For the balanced data-set, there are 2779 instances of each class. For the unbalanced data-set, there are 2779 instances of positive and 6471 instances of negative, thus our chance baseline is around 70%. As in the earlier classification, accuracy and F-measure increase as we add features. While the gain from adding the chunk features, for example, is not as great as in the previous classification, it is nonetheless significant (p-value = 0.0018) in this classification task. The smaller increase lends support to our hypothesis that polar expressions tend to be less subjective and thus are less likely to be affected by contextual polarity. Another observation that supports our hypothesis that neutral expressions are more subjective is that the rank of imagery (ii) dropped significantly in this classification task as compared to the previous one. This implies that imagery has a much lesser role to play when we are dealing with non-neutral expressions.
Positive versus Negative
Conclusion and Future Work
We present new features (DAL scores, norm scores computed using DAL, and n-grams over chunks with polarity) for phrase-level sentiment analysis. They work well and help in achieving high accuracy in a three-way classification of positive, negative and neutral expressions. We do not require any manual intervention during feature selection, and thus our system is fully automated. We also introduce a 3-D representation that maps different classes to spatial coordinates.
It may seem to be a limitation of our system that it requires accurate expression boundaries. However, this is not true for the following two reasons: first, Wiebe et al. (2005) state that while marking the span of subjective expressions and hand annotating the MPQA corpus, the annotators were not trained to mark accurate expression boundaries; the only constraint was that the subjective expression should be within the mark-ups for all annotators. Second, we expanded the marked subjective phrase to subsume neighboring phrases at the time of chunking.
A limitation of our scoring scheme is that it does not handle polysemy, since words in DAL are not provided with their parts of speech. Statistics show, however, that most words occurred primarily with one part of speech only. For example, "will" occurred as a modal 1272 times in the corpus, whereas it appeared 34 times as a noun. The case is similar for "like" and "just", which mostly occur as a preposition and an adverb, respectively. Also, in our state machine, we have not accounted for the impact of connectives such as "but" or "although"; we propose drawing on work in argumentative orientation to do so (Anscombre and Ducrot, 1983; Elhadad and McKeown, 1990).
For future work, it would be interesting to do subjectivity and intensity classification using the same scheme and features. Particularly, for the task of subjectivity analysis, we speculate that the imagery score might be useful for tagging chunks with "subjective" and "objective" instead of positive, negative, and neutral.
Figure 2: (a) An example sentence with three annotated subjective phrases in the same sentence. (b) Part of the sentence with the target phrase (B) and their chunks with prior polarities.
Table 2: Example of scoring scheme using DAL.
Table 3: Results of 3-way classification (Positive, Negative, and Neutral). In the unbalanced case, the majority class baseline is 46.3% (*F-Measure).
Feature Types    | Accuracy | Pos.* | Neg.*
Chance baseline  | 50%      | -     | -
N-gram baseline  | 73.21%   | 0.736 | 0.728
DAL scores only  | 77.02%   | 0.763 | 0.728
+ POS            | 79.02%   | 0.788 | 0.792
+ Chunks         | 80.72%   | 0.807 | 0.807
+ N-gram (all)   | 82.32%   | 0.802 | 0.823
All (unbalanced) | 84.08%   | 0.716 | 0.889
Table 4: Positive vs. Negative classification results. Baseline is the majority class. In the unbalanced case, the majority class baseline is 69.74%. (*F-Measure)
We assign polarity to phrases based on Wiebe et al. (2005); the polarity of all examples shown here is drawn from annotations in the MPQA corpus. Clearly the assignment of polarity chosen in this corpus depends on general cultural norms.
http://www.wjh.harvard.edu/~inquirer
We use the Stanford Tagger (Toutanova and Manning, 2000) to assign part-of-speech tags to sentences.
Xuan-Hieu Phan, "CRFChunker: CRF English Phrase Chunker", http://crfchunker.sourceforge.net/, 2006.
We use the binomial test procedure to test statistical significance throughout the paper.
Acknowledgments

This work was supported by the National Science Foundation under the KDD program. Any opinions, findings, and conclusions or recommendations expressed in this paper are those of the authors and do not necessarily reflect the views of the National Science Foundation. We would like to thank Julia Hirschberg for useful discussion. We would also like to acknowledge Narayanan Venkiteswaran for implementing parts of the system and Amal El Masri, Ashleigh White and Oliver Elliot for their useful comments.
Philosophie et langage. l'argumentation clans la langue. J C Anscombre, O Ducrot, Pierre MardagaBruxellesJ.C. Anscombre and O. Ducrot. 1983. Philosophie et langage. l'argumentation clans la langue. Bruxelles: Pierre Mardaga.
Automatic recognition of emotionally coloured speech. T Athanaselis, S Bakamidis, L Dologlou, In Proceedings of World Academy of Science, Engineering and Technology. 12T. Athanaselis, S. Bakamidis, , and L. Dologlou. 2006. Automatic recognition of emotionally coloured speech. In Proceedings of World Academy of Sci- ence, Engineering and Technology, volume 12, ISSN 1307-6884.
Emotion recognition in human-computer interaction. R Cowie, E Douglas-Cowie, N Tsapatsoulis, G Votsis, S Kollias, W Fellenz, IEEE Signal Processing Magazine. 1R. Cowie, E. Douglas-Cowie, N. Tsapatsoulis, G. Vot- sis, S. Kollias, and W. Fellenz et al. 2001. Emo- tion recognition in human-computer interaction. In IEEE Signal Processing Magazine, 1, 32-80.
Generating connectives. M Elhadad, K R Mckeown, Proceedings of the 13th conference on Computational linguistics. the 13th conference on Computational linguisticsMorristown, NJ, USAAssociation for Computational LinguisticsM. Elhadad and K. R. McKeown. 1990. Generating connectives. In Proceedings of the 13th conference on Computational linguistics, pages 97-101, Mor- ristown, NJ, USA. Association for Computational Linguistics.
Wordnet, an electronic lexical database. C Fellbaum, MIT pressC. Fellbaum. 1998. Wordnet, an electronic lexical database. In MIT press.
Predicting the semantic orientation of adjectives. V Hatzivassiloglou, K Mckeown, Proceedings of ACL. ACLV. Hatzivassiloglou and K. McKeown. 1997. Predict- ing the semantic orientation of adjectives. In Pro- ceedings of ACL.
Distinguishing deceptive from non-deceptive speech. J Hirschberg, S Benus, J M Brenier, F Enos, S Friedman, Proceedings of Interspeech. InterspeechJ. Hirschberg, S. Benus, J.M. Brenier, F. Enos, and S. Friedman. 2005. Distinguishing deceptive from non-deceptive speech. In Proceedings of Inter- speech, 1833-1836.
Mining and summarizing customer reviews. M Hu, B Liu, Proceedings of KDD. KDDM. Hu and B. Liu. 2004. Mining and summarizing customer reviews. In Proceedings of KDD.
Words with attitude. J Kamps, M Marx, 1st International WordNet Conference. J. Kamps and M. Marx. 2002. Words with attitude. In 1st International WordNet Conference.
Determining the sentiment of opinions. S M Kim, E Hovy, In In ColingS. M. Kim and E. Hovy. 2004. Determining the senti- ment of opinions. In In Coling.
Sentiment analysis: Capturing favorability using natural language processing. T Nasukawa, J Yi, Proceedings of K-CAP. K-CAPT. Nasukawa and J. Yi. 2003. Sentiment analysis: Capturing favorability using natural language pro- cessing. In Proceedings of K-CAP.
A sentimental education: Sentiment analysis using subjectivity analysis using subjectivity summarization based on minimum cuts. B Pang, L Lee, Proceedings of ACL. ACLB. Pang and L. Lee. 2004. A sentimental education: Sentiment analysis using subjectivity analysis using subjectivity summarization based on minimum cuts. In Proceedings of ACL.
A comprehensive grammar of the english language. R Quirk, S Greenbaum, G Leech, J Svartvik, LongmanNew YorkR. Quirk, S. Greenbaum, G. Leech, and J. Svartvik. 1985. A comprehensive grammar of the english lan- guage. Longman, New York.
Learning extraction patterns for subjective expressions. E Riloff, J Wiebe, Proceedings of EMNLP. EMNLPE. Riloff and J. Wiebe. 2003. Learning extraction pat- terns for subjective expressions. In Proceedings of EMNLP.
Enriching the knowledge sources used in a maximum entropy part-of-speech tagger. K Toutanova, C D Manning, Proceedings of the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora (EMNLP/VLC-2000). the Joint SIGDAT Conference on Empirical Methods in Natural Language Processing and Very Large Corpora (EMNLP/VLC-2000)K. Toutanova and C. D. Manning. 2000. Enriching the knowledge sources used in a maximum entropy part-of-speech tagger. In Proceedings of the Joint SIGDAT Conference on Empirical Methods in Nat- ural Language Processing and Very Large Corpora (EMNLP/VLC-2000), pp. 63-70.
Thumbs up or thumbs down? semantic orientation applied to unsupervised classification of reviews. P Turney, Proceedings of ACL. ACLP. Turney. 2002. Thumbs up or thumbs down? seman- tic orientation applied to unsupervised classification of reviews. In Proceedings of ACL.
The dictionary of affect in language. C M Whissel, Emotion: theory research and experience. R. Plutchik and H. KellermanLondonAcad. Press4C. M. Whissel. 1989. The dictionary of affect in lan- guage. In R. Plutchik and H. Kellerman, editors, Emotion: theory research and experience, volume 4, Acad. Press., London.
A psychological investigation of the use of shakespeare=s emotional language: The case of his roman tragedies. C M Whissell, Edwin Mellen PressLewiston, NYC. M. Whissell. 2008. A psychological investiga- tion of the use of shakespeare=s emotional language: The case of his roman tragedies. In Edwin Mellen Press., Lewiston, NY.
Annotating expressions of opinions and emotions in language. J Wiebe, T Wilson, C Cardie, Language Resources and Evaluation. 39J. Wiebe, T. Wilson, and C. Cardie. 2005. Annotating expressions of opinions and emotions in language. In Language Resources and Evaluation, volume 39, issue 2-3, pp. 165-210.
Recognizing contextual polarity in phrase level sentiment analysis. T Wilson, J Wiebe, P Hoffman, Proceedings of ACL. ACLT. Wilson, J. Wiebe, and P. Hoffman. 2005. Recog- nizing contextual polarity in phrase level sentiment analysis. In Proceedings of ACL.
Sentiment analyzer: Extracting sentiments about a given topic using natural language processing techniques. J Yi, T Nasukawa, R Bunescu, W Niblack, Proceedings of IEEE ICDM. IEEE ICDMJ. Yi, T. Nasukawa, R. Bunescu, and W. Niblack. 2003. Sentiment analyzer: Extracting sentiments about a given topic using natural language processing tech- niques. In Proceedings of IEEE ICDM.
Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences. H Yu, V Hatzivassiloglou, Proceedings of EMNLP. EMNLPH. Yu and V. Hatzivassiloglou. 2003. Towards an- swering opinion questions: Separating facts from opinions and identifying the polarity of opinion sen- tences. In Proceedings of EMNLP. |
52,010,259 | Dynamic Feature Selection with Attention in Incremental Parsing | One main challenge for incremental transition-based parsers, when future inputs are invisible, is to extract good features from a limited local context. In this work, we present a simple technique to maximally utilize the local features with an attention mechanism, which works as contextdependent dynamic feature selection. Our model learns, for example, which tokens should a parser focus on, to decide the next action. Our multilingual experiment shows its effectiveness across many languages. We also present an experiment with augmented test dataset and demonstrate it helps to understand the model's behavior on locally ambiguous points.This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http:// creativecommons.org/licenses/by/4.0/ | [
7356547,
33542057,
2948298,
246647,
1998416,
6205777,
1689426,
11616343,
9278872,
10901371,
1918428,
269533,
6278207
] | Dynamic Feature Selection with Attention in Incremental Parsing
August 20-26. 2018
Ryosuke Kohita ryosuke.kohita1@ibm.com
†IBM Research §Artificial Intelligence Research Center
School of Information Science
National Institute of Advanced Industrial Science and Technology (AIST)
Nara Institute of Science and Technology (NAIST)
Hiroshi Noji hiroshi.noji@aist.go.jp
†IBM Research §Artificial Intelligence Research Center
School of Information Science
National Institute of Advanced Industrial Science and Technology (AIST)
Nara Institute of Science and Technology (NAIST)
Yuji Matsumoto
†IBM Research §Artificial Intelligence Research Center
School of Information Science
National Institute of Advanced Industrial Science and Technology (AIST)
Nara Institute of Science and Technology (NAIST)
Dynamic Feature Selection with Attention in Incremental Parsing
Proceedings of the 27th International Conference on Computational Linguistics
the 27th International Conference on Computational LinguisticsSanta Fe, New Mexico, USAAugust 20-26. 2018785
One main challenge for incremental transition-based parsers, when future inputs are invisible, is to extract good features from a limited local context. In this work, we present a simple technique to maximally utilize the local features with an attention mechanism, which works as contextdependent dynamic feature selection. Our model learns, for example, which tokens should a parser focus on, to decide the next action. Our multilingual experiment shows its effectiveness across many languages. We also present an experiment with augmented test dataset and demonstrate it helps to understand the model's behavior on locally ambiguous points.This work is licensed under a Creative Commons Attribution 4.0 International License. License details: http:// creativecommons.org/licenses/by/4.0/
Introduction
This paper explores better feature representations for incremental dependency parsing. We focus on a system that builds a parse tree incrementally, receiving each word of a sentence as it arrives, which is crucial for interactive systems to achieve fast response or human-like behavior such as understanding from partial input (Baumann, 2013). The most natural way to achieve incremental parsing is using a transition system (Nivre, 2008), and for such parsers, the main challenge is to choose an appropriate action with only the local context information. While some recent transition-based parsers alleviate this difficulty by exploiting the entire input sentence with recurrent neural networks (Kiperwasser and Goldberg, 2016; Shi et al., 2017), one possible disadvantage is that they require all inputs to be visible from the beginning, which can be a problem under stricter incremental conditions such as simultaneous translation. Therefore, there is still a need to explore effective ways to extract better feature representations from incomplete inputs.
In this paper, we incorporate a simple attention mechanism (Bahdanau et al., 2015) into an incremental parser and investigate its effectiveness during feature extraction. Attention first succeeded in machine translation, capturing the relative importance of tokens at each step for producing a proper output (Bahdanau et al., 2015; Luong et al., 2015). This ability to weight features automatically and effectively has been applied to various tasks such as seq-to-seq parsing (Vinyals et al., 2015), text summarization (Rush et al., 2015), dialogue generation (Shang et al., 2015), and image captioning (Xu et al., 2015), where systems gain performance by attending to specific clues depending on the situation. We expect this behavior also helps to fix the errors which transition-based parsers often commit due to local ambiguities.
Figure 1: A locally ambiguous sentence ("John on Monday introduces advisors."). "Monday" should be analyzed as an oblique of "introduces", but tends to be analyzed as a noun modifier of "John".
Figure 1 shows our motivating example, on which the standard transition-based parser fails and attaches "Monday" to "John": at the POS level, and given the usual behavior of "on", this sequence looks like a typical noun phrase. By introducing attention into feature extraction, we expect the model to attend to important tokens, in this case "Monday", which is unlikely to attach to a person and suggests that the parser should anticipate the following predicate. Our technique can be applied to any model with feed-forward networks over concatenated feature embeddings; in this work, we apply it to the standard transition-based parser of Chen and Manning (2014).
In a multilingual experiment on Universal Dependencies (UD) 2.0 (Zeman et al., 2017), we find that our attention brings performance gains for most languages. To inspect the model's behavior, we also introduce a controlled experiment with manually created data. For this experiment, we prepare a set of sentences for which the parser must attend to the key points for correct disambiguation, as in Figure 1, and see whether the model behaves as expected. There we give a detailed error analysis suggesting what makes the local ambiguities difficult to solve and how attention resolves them. This type of analysis is common in psycholinguistics (Levy, 2008), and a similar idea has recently begun to be explored for neural NLP models (Shekhar et al., 2017).
Model
Base model
Our base model is the transition-based neural parser of Chen and Manning (2014). 1 For each step, this parser first creates feature vectors of words (x_w), POS tags (x_p), and labels (x_l), each of which is a concatenation of embeddings around a stack and a buffer. These vectors are transformed with corresponding weights, i.e., h = W_w x_w + W_p x_p + W_l x_l + b, followed by a nonlinearity. A subsequent softmax layer then provides action probabilities.
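A rough PyTorch sketch of this scorer is given below; the dimensions, the ReLU nonlinearity and the number of actions are illustrative stand-ins (the text only says "followed by a nonlinearity"), not the exact configuration of Chen and Manning (2014).

```python
import torch
import torch.nn as nn

class FeedForwardScorer(nn.Module):
    """Separate projections for word, POS and label feature vectors, summed into
    one hidden layer, followed by a softmax over parser actions."""

    def __init__(self, d_word, d_pos, d_label, d_hidden, n_actions):
        super().__init__()
        self.W_w = nn.Linear(d_word, d_hidden, bias=False)
        self.W_p = nn.Linear(d_pos, d_hidden, bias=False)
        self.W_l = nn.Linear(d_label, d_hidden, bias=False)
        self.b = nn.Parameter(torch.zeros(d_hidden))
        self.out = nn.Linear(d_hidden, n_actions)

    def forward(self, x_w, x_p, x_l):
        # h = W_w x_w + W_p x_p + W_l x_l + b, followed by a nonlinearity
        # (ReLU here; the original parser uses a different activation).
        h = torch.relu(self.W_w(x_w) + self.W_p(x_p) + self.W_l(x_l) + self.b)
        return torch.log_softmax(self.out(h), dim=-1)  # action log-probabilities

scorer = FeedForwardScorer(d_word=18 * 50, d_pos=18 * 20, d_label=12 * 20,
                           d_hidden=200, n_actions=80)
log_probs = scorer(torch.randn(2, 18 * 50), torch.randn(2, 18 * 20), torch.randn(2, 12 * 20))
print(log_probs.shape)  # (2, 80)
```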
Although this method is relatively old, the approach of creating the feature vector from independent embeddings is useful in our second experiment inspecting attention behavior (see Section 3.3 for details). In addition, UDPipe (Straka et al., 2016), the baseline parser in the latest shared task (Zeman et al., 2017), also adopts this approach and performs well compared to systems using more recent techniques.
Attention on local features
We introduce attention in feature computation from the input embeddings to h. Note that three components W w x w , W p w p , and W l x l are independent; in the following we focus on just one part, abstracted by Wx, and describe how attention is applied for this computation.
Our attention calculates the importance of input elements. First, note that x is a concatenation of embeddings of input elements, and when the number of elements is n, W can also be divided into n blocks as in Figure 2. When these parts are denoted by W_i and x_i, Wx = Σ_i W_i x_i holds. We define c_i = W_i x_i, which corresponds to the hidden representation for the i-th input element.
Our core idea is to apply attention on the decomposed hidden vectors {c_i}. Using an attention vector a = (a_1, a_2, ..., a_n), the new hidden representation becomes h_g = Σ_i a_i c_i. We obtain attention a_i using c_i and parameters q as follows:

a_i = exp(σ(q · c_i)) / Σ_{j=1}^{n} exp(σ(q · c_j)),
where σ is a sigmoid function. We use different attention parameters q w , q p , and q l for word, POS, and label inputs, respectively.
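The following PyTorch sketch illustrates this attention for a single input type (e.g. word features); the number of feature positions and the dimensions are illustrative, and the parameter names mirror the notation above rather than any released code.

```python
import torch
import torch.nn as nn

class DecomposedAttention(nn.Module):
    """Attention over the per-element hidden vectors c_i = W_i x_i, following
    the formula above, for a single input type (e.g. word features)."""

    def __init__(self, n_elements, d_embed, d_hidden):
        super().__init__()
        # One block W_i per input element, plus the shared attention vector q.
        self.blocks = nn.ModuleList(
            nn.Linear(d_embed, d_hidden, bias=False) for _ in range(n_elements))
        self.q = nn.Parameter(torch.randn(d_hidden))

    def forward(self, x_elements):
        # x_elements: (batch, n_elements, d_embed), one embedding per feature position.
        c = torch.stack([W(x_elements[:, i]) for i, W in enumerate(self.blocks)], dim=1)
        scores = torch.sigmoid(c @ self.q)       # sigma(q . c_i), shape (batch, n)
        a = torch.softmax(scores, dim=1)         # attention weights a_i
        h_g = (a.unsqueeze(-1) * c).sum(dim=1)   # h_g = sum_i a_i c_i
        return h_g, a

attention = DecomposedAttention(n_elements=18, d_embed=50, d_hidden=200)
h_g, a = attention(torch.randn(4, 18, 50))
print(h_g.shape, a.shape)  # (4, 200), (4, 18)
```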
Experiments
Our first experiment is on the multilingual UD treebanks used in CoNLL 2017 shared task (Zeman et al., 2017). In addition to this, we present another experiment using augmented test data. This is a set of sentences for which there is a key token for correct disambiguation. We will see whether our model is capable of disambiguating them by attending to the critical points.
For both experiments, our baselines are our parser without attention, and UDPipe v1.1 (Straka et al., 2016), which was the state-of-the-art among transition-based parsers with local features on the shared task. 2
Parser
We extract features from the same positions as Chen and Manning (2014): the top three tokens on the stack and the buffer, the first and second leftmost or rightmost children of the top two tokens on the stack, and the leftmost or rightmost children of those leftmost or rightmost children. However, from each position we extract more information, such as LEMMA (see also footnote 1). The embedding sizes are 50 dimensions for WORD and 20 dimensions for LEMMA, UPOS, XPOS, FEATS, and DEPREL. We also extract a 32-dimensional character encoding of each token with bi-LSTMs, though we do not apply attention to it. The hidden dimension is 200, to which we apply 50% dropout. We use the pre-trained embeddings used in the baseline UDPipe. 3 To handle non-projectivity, we employ the arc-standard swap algorithm (Nivre et al., 2009). We also use beam search with width 5. To learn a representation for unknown words, we stochastically replace singletons with a dummy token. These hyperparameters are the same across languages except for Kazakh; this differs from UDPipe, which tunes the settings for each language. For Kazakh, whose treebank is extremely small, we find that increasing the embedding sizes to 100, 50, and 50 dimensions for WORD, UPOS, and XPOS works well, so we use this setting.
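As a compact summary of this template, the sketch below enumerates the 18 token positions and the embedding sizes as plain data; the descriptor encoding (e.g. "lc1" for the first leftmost child) is our own shorthand, not the authors' code.

```python
# The 18 token positions of the feature template, written as
# (source, index, child-path) descriptors, plus the embedding sizes.
FEATURE_POSITIONS = (
    [("stack", i, ()) for i in range(3)] +                 # top three stack tokens
    [("buffer", i, ()) for i in range(3)] +                # top three buffer tokens
    [("stack", i, (c,)) for i in range(2)                  # 1st/2nd leftmost/rightmost
     for c in ("lc1", "lc2", "rc1", "rc2")] +              # children of top two stack tokens
    [("stack", i, (c, c)) for i in range(2)                # children of those children
     for c in ("lc1", "rc1")]
)
assert len(FEATURE_POSITIONS) == 18

EMBEDDING_SIZES = {"WORD": 50, "LEMMA": 20, "UPOS": 20, "XPOS": 20,
                   "FEATS": 20, "DEPREL": 20, "CHAR_BILSTM": 32}
print(FEATURE_POSITIONS[:6])
```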
Multilingual evaluation
We use 63 treebanks in 45 languages on Universal Dependencies v2.0 (Nivre et al., 2017), with the same data split as the setting of official UDPipe. 3 We evaluate F1 LAS of each treebank and their macro average. For the development sets, we use the gold preprocessed data while for the test sets, we parse the raw text preprocessed by UDPipe.
With respect to the macro-averaged score, we can see in Table 1 below that our model without attention (w/o Att.) is comparable to UDPipe; with attention, it outperforms both. Inspecting the results in detail, we see that our attention improves the scores on 54 treebanks on the development set and 57 treebanks on the test set. We also see that the treebanks for which our attention degrades performance are relatively small, e.g., en partut (1,035 sentences) and hu (864 sentences), which indicates that our attention may be more data-hungry.
Augmented data evaluation
Why does attention help with disambiguation? To inspect this, we now perform a controlled experiment by parsing a set of sentences whose correct disambiguation may require attending to specific points. We present two different sets in English, which differ in the points where the model should attend.
Oblique vs. noun modifier The first set is related to the difficulty shown on the left of Figure 3, where, as we discussed, the parser may be confused and attach "Monday" to "John" as a noun modifier, since the POS sequence of "John on Monday" and the usual behavior of "on" are typical of a noun phrase. The right of Figure 3 shows the step where the parser must decide the head of "Monday"; here the correct action is shift, while right-arc leads to the wrong analysis. At this step, though the important token for a typical NP is "on", we expect the parser to focus more on "Monday", which is likely to attach to a subsequent predicate as an oblique.
To inspect the model's ability to correctly handle these ambiguities, we prepare 14 pairs of sentences. 4 Each pair differs minimally, as in "John on Monday introduces advisors" and "John on a balcony introduces advisors", in which the former should be analyzed as an oblique (obl) and the latter as a modifier (nmod). Table 2 contrasts the inputs to the parsers when gold preprocessing is given, where the differences always appear at the third and fourth tokens ("Monday" in obl vs. "a" and "balcony" in nmod). All words in these items occur at least once in the training corpus, so no unknown words are involved.

Table 2: Minimal pair in the oblique ("John on Monday introduces advisors") vs. noun modifier ("John on a balcony introduces advisors") experiment
The result is summarized in Table 3, where we count the number of sentences on which the parser outputs are perfect. We can see that nmod sentences are analyzed almost perfectly, which is intuitive as this structure is typical. obl sentences are more difficult, but the system with attention is capable of handling them; the other systems fail, even assuming gold tags. For pred tags, all systems receive the same inputs tagged by UDPipe. The accuracy for obl decreases, and we find the errors are due to incorrect POS tags for the predicate at the fifth word, which is sometimes tagged as a noun. This suggests our attention parser can handle these local ambiguities unless a crucial tag error occurs, while the other systems cannot at all.
Finally, we show in Figure 4 the attention weights on features at a branching step (the right of Figure 3) for the sample sentences in Table 2. We can see that for the obl sentence the parser attends more to the key tokens, "Monday" on the stack and "introduces" on the buffer. This suggests the attention mechanism works as we expected and its behavior matches our intuition.
Object complement vs. that clause The second set concerns ambiguities different from the previous experiment; an example pair is shown on the left of Figure 5, where, to correctly parse the lower one, the parser must recognize the implicit that clause (that) rather than an object complement (oc). The right of the figure shows the configuration at which the parser must choose the structure, by shift or right-arc.
The key token for correct analysis is the last one, which can be accessed as the second token on the buffer.

Figure 5: The representative pair for the second set: object complement (left above) vs. that clause (left below) and the branching configuration (right).
We prepare 24 pairs of sentences. Table 4 shows an example of the differences within a pair. In these sentences, the third to fifth tokens differ. Note that, contrary to Table 2, these two conditions are distinguishable by POS tags (e.g., VBN or VBD), so the main challenge is whether the model can attend to the key tokens when the predicted, noisy tags are used.

Table 4: Minimal pair in the object-complement ("John found it ignored before") vs. that-clause ("John found it ignored comments") experiment

Table 5 summarizes the results. As we expected, all systems succeed with gold tags but, with predicted tags, perform badly, in particular on that sentences. Inspecting the errors, we find that this is due to error propagation from an incorrect tag for it (third token), to which UDPipe assigns an Acc(usative) feature due to the that-omission. By this error, another error is induced on the POS tag of the next token (e.g., ignored), which becomes a participle or an adjective. These erroneous tags make it hard for parsers to recognize an implicit that.
Though all models fail, we notice that for 30% of the sentences (7/24), our attention parser recognizes the existence of the that-clause, by wrongly analyzing the last noun (e.g., comments) as the head of the clause (it becomes the nsubj of the noun). Inspecting the attention weights for the successful and failed cases (Figure 6), we find that the last noun receives slightly more attention in the successful case (above), which may lead the parser to predict a ccomp arc (but to a wrong word).
Table 5: # of correct sentences for oc vs. that. Numbers in brackets are cases where the that-omission is correctly predicted but other errors exist (see body).

          | Gold tags       | Pred tags
          | oc     | that   | oc     | that
UDPipe    | 23/24  | 24/24  | 19/24  | 0 (0)/24
w/o Att.  | 24/24  | 24/24  | 16/24  | 1 (2)/24
w/ Att.   | 23/24  | 24/24  | 18/24  | 1 (7)/24

Figure 6: Attention weights on that sentences when the that-omission is predicted (above) or failed (below).
Conclusion
We have presented a simple attention mechanism for dynamic feature selection, which can be applied to any feed-forward network over concatenated feature embeddings. When applied to an incremental parser, parsing performance increased across many languages. Our augmented-data experiment also showed that the parser successfully learns where to focus in each context and becomes more robust to erroneously tagged sentences.
Figure 2: Our attention mechanism on the decomposed hidden vectors c_i, obtained by W_i x_i.
Figure 3: Left: A locally ambiguous sentence, reprint of Figure 1. Right: The configuration on which the parser must decide whether "Monday" works as an oblique or a modifier.
Figure 4: Attention weights on the obl sentence (above) and the nmod sentence (below). s_i and b_i are the i-th top-most positions on the stack and buffer, respectively. lc_i and rc_i are their (inward) i-th left and right children.
Table 1: LAS F1 on the UD 2.0 development and test sets (last row is the macro average). (a) The test set of the it partut treebank was excluded in the shared task. (b) UDPipe's official score is 68.35 because it includes the scores for extra treebanks in the shared task, called surprise languages.

Treebank | UDPipe (dev) | w/o Att. (dev) | w/ Att. (dev) | UDPipe (test) | w/o Att. (test) | w/ Att. (test)
ar | 78.11 | 78.73 | 79.84 | 65.30 | 64.72 | 65.43
bg | 87.56 | 86.90 | 87.35 | 83.64 | 83.23 | 83.44
ca | 88.35 | 87.52 | 88.28 | 85.39 | 84.66 | 85.43
cs | 88.19 | 86.29 | 87.06 | 82.87 | 81.15 | 82.27
cs cac | 86.57 | 86.13 | 87.01 | 82.46 | 81.48 | 82.15
cs cltt | 78.95 | 79.96 | 79.22 | 71.64 | 72.08 | 72.56
cu | 79.44 | 79.46 | 81.69 | 62.76 | 63.19 | 65.40
da | 81.13 | 80.57 | 82.01 | 73.38 | 73.16 | 74.22
de | 84.06 | 83.58 | 84.25 | 69.11 | 67.54 | 68.66
el | 83.71 | 84.59 | 83.88 | 79.26 | 79.56 | 80.03
en | 85.82 | 84.96 | 85.29 | 75.84 | 74.65 | 75.06
en lines | 80.51 | 80.40 | 80.46 | 72.94 | 73.75 | 74.11
en partut | 81.29 | 81.49 | 79.95 | 73.64 | 73.37 | 73.15
es | 86.69 | 86.17 | 86.66 | 81.47 | 80.55 | 81.58
es ancora | 87.55 | 86.98 | 87.89 | 83.78 | 83.59 | 84.39
et | 76.37 | 74.00 | 75.63 | 58.79 | 57.62 | 58.74
eu | 76.88 | 76.31 | 77.93 | 69.15 | 68.24 | 70.29
fa | 85.16 | 82.69 | 83.60 | 79.24 | 77.20 | 78.28
fi | 82.12 | 81.83 | 83.10 | 73.75 | 73.73 | 74.73
fi ftb | 85.14 | 84.70 | 86.20 | 74.03 | 73.45 | 74.54
fr | 89.02 | 87.94 | 88.82 | 80.75 | 79.87 | 80.70
fr partut | 80.61 | 78.81 | 82.42 | 77.38 | 77.62 | 78.08
fr sequoia | 86.66 | 86.66 | 86.60 | 79.98 | 80.00 | 80.29
ga | 71.09 | 70.49 | 72.75 | 61.52 | 62.37 | 62.62
gl | 80.55 | 81.16 | 82.22 | 77.31 | 77.82 | 78.71
gl treegal | 74.48 | 75.46 | 75.13 | 65.82 | 65.06 | 65.30
got | 76.51 | 77.32 | 77.86 | 59.81 | 60.16 | 60.80
grc | 61.65 | 62.80 | 65.51 | 56.04 | 54.83 | 55.66
grc proiel | 75.72 | 74.58 | 76.78 | 65.22 | 64.80 | 66.79
he | 83.18 | 81.94 | 82.87 | 57.23 | 55.13 | 55.07
hi | 91.07 | 91.72 | 92.15 | 86.77 | 86.02 | 86.46
hr | 80.76 | 79.46 | 81.17 | 77.18 | 76.35 | 77.59
hu | 73.98 | 75.42 | 75.36 | 64.30 | 64.23 | 64.01
id | 78.43 | 78.24 | 79.15 | 74.61 | 74.41 | 75.31
it | 88.44 | 87.27 | 88.89 | 85.28 | 84.47 | 85.20
it partut (a) | 85.16 | 84.20 | 83.85 | - | - | -
ja | 95.48 | 95.28 | 95.23 | 72.21 | 72.68 | 72.69
kk | 34.83 | 37.08 | 22.47 | 24.51 | 25.14 | 22.77
ko | 62.06 | 79.28 | 80.10 | 59.09 | 73.52 | 74.38
la | 60.04 | 61.44 | 63.11 | 43.77 | 43.78 | 46.51
la ittb | 77.91 | 77.01 | 79.98 | 76.98 | 75.78 | 76.67
la proiel | 74.36 | 72.48 | 75.06 | 57.54 | 57.11 | 58.28
lv | 72.71 | 72.58 | 73.37 | 59.95 | 58.65 | 60.13
nl | 82.43 | 81.85 | 83.51 | 68.90 | 68.02 | 68.93
nl lassysmall | 80.34 | 79.22 | 80.61 | 78.15 | 76.15 | 78.86
no bokmaal | 88.78 | 87.54 | 88.70 | 83.27 | 81.61 | 82.71
no nynorsk | 87.99 | 87.56 | 88.04 | 81.56 | 80.51 | 80.94
pl | 87.35 | 87.79 | 86.66 | 78.78 | 78.99 | 78.65
pt | 89.45 | 92.10 | 92.59 | 82.11 | 78.79 | 78.91
pt br | 89.57 | 88.97 | 89.58 | 85.36 | 84.91 | 85.35
ro | 82.25 | 81.80 | 82.59 | 79.88 | 78.93 | 80.07
ru | 80.84 | 81.79 | 82.53 | 74.03 | 74.79 | 75.36
ru syntagrus | 89.63 | 88.10 | 89.38 | 86.76 | 85.55 | 86.54
sk | 83.83 | 82.95 | 84.13 | 72.75 | 72.14 | 73.66
sl | 89.15 | 89.18 | 90.05 | 81.15 | 80.06 | 81.18
sl sst | 67.31 | 66.81 | 68.09 | 46.45 | 46.05 | 46.50
sv | 80.40 | 78.94 | 80.91 | 76.73 | 76.11 | 76.32
sv lines | 81.38 | 81.07 | 81.73 | 74.29 | 73.62 | 74.07
tr | 60.27 | 59.45 | 61.48 | 53.19 | 54.50 | 55.50
ug | 53.85 | 58.65 | 49.04 | 34.18 | 36.26 | 35.62
uk | 69.30 | 70.61 | 70.68 | 60.76 | 60.91 | 61.13
ur | 81.62 | 85.47 | 85.68 | 76.69 | 76.36 | 76.98
vi | 66.22 | 68.13 | 69.27 | 37.47 | 37.85 | 38.10
zh | 79.37 | 77.45 | 78.21 | 57.40 | 56.18 | 56.22
avg. | 79.52 | 79.65 | 80.18 | 70.34 (b) | 70.06 | 70.79
Table 3: # of correct analyses for obl vs. nmod pairs.

          | Gold tags       | Pred tags
          | obl    | nmod   | obl    | nmod
UDPipe    | 0/14   | 14/14  | 0/14   | 13/14
w/o Att.  | 0/14   | 14/14  | 0/14   | 13/14
w/ Att.   | 12/14  | 14/14  | 6/14   | 13/14
As described in Section 3.1 we slightly extend their parser to use additional features. In this section, we first present our model with the original features for simplicity.
There are three systems (Straka and Straková, 2017; Kanerva et al., 2017; Yu et al., 2017) that outperform UDPipe v1.1, but the improvements come not from parsing models but from preprocessing, such as improvements to the POS tagger.
3 https://lindat.mff.cuni.cz/repository/xmlui/handle/11234/1-1990
All items are shown in Appendix A.
Neural machine translation by jointly learning to align and translate. Dzmitry Bahdanau, Kyunghyun Cho, Yoshua Bengio, Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate.
Incremental Spoken Dialogue Processing: Architecture and Lower-level Components. Timo Baumann, Ph.D. thesisTimo Baumann. 2013. Incremental Spoken Dialogue Processing: Architecture and Lower-level Components. Ph.D. thesis.
A fast and accurate dependency parser using neural networks. Danqi Chen, Christopher Manning, Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)Doha, QatarAssociation for Computational LinguisticsDanqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 740-750, Doha, Qatar, October. Association for Computational Linguistics.
Transition-based dependency parsing with stack long short-term memory. Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, Noah A Smith, Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language ProcessingBeijing, ChinaAssociation for Computational Linguistics1Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transition-based dependency parsing with stack long short-term memory. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 334-343, Beijing, China, July. Association for Computational Linguistics.
Turkunlp: Delexicalized pre-training of word embeddings for dependency parsing. Jenna Kanerva, Juhani Luotolahti, Filip Ginter, Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal DependenciesVancouver, CanadaAssociation for Computational LinguisticsJenna Kanerva, Juhani Luotolahti, and Filip Ginter. 2017. Turkunlp: Delexicalized pre-training of word embed- dings for dependency parsing. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 119-125, Vancouver, Canada, August. Association for Computational Linguistics.
Simple and accurate dependency parsing using bidirectional lstm feature representations. Eliyahu Kiperwasser, Yoav Goldberg, Transactions of the Association for Computational Linguistics. 4Eliyahu Kiperwasser and Yoav Goldberg. 2016. Simple and accurate dependency parsing using bidirectional lstm feature representations. Transactions of the Association for Computational Linguistics, 4:313-327.
Expectation-based syntactic comprehension. Roger Levy, Cognition. 106311261177Roger Levy. 2008. Expectation-based syntactic comprehension. Cognition, 106(3):11261177.
Finding function in form: Compositional character models for open vocabulary word representation. Wang Ling, Chris Dyer, Alan W Black, Isabel Trancoso, Ramon Fermandez, Silvio Amir, Luis Marujo, Tiago Luis, Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. the 2015 Conference on Empirical Methods in Natural Language ProcessingLisbon, PortugalAssociation for Computational LinguisticsWang Ling, Chris Dyer, Alan W Black, Isabel Trancoso, Ramon Fermandez, Silvio Amir, Luis Marujo, and Tiago Luis. 2015. Finding function in form: Compositional character models for open vocabulary word represen- tation. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1520-1530, Lisbon, Portugal, September. Association for Computational Linguistics.
Effective approaches to attention-based neural machine translation. Minh-Thang Luong, Hieu Pham, Christopher D Manning, Empirical Methods in Natural Language Processing (EMNLP). Lisbon, PortugalAssociation for Computational LinguisticsMinh-Thang Luong, Hieu Pham, and Christopher D. Manning. 2015. Effective approaches to attention-based neural machine translation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1412- 1421, Lisbon, Portugal, September. Association for Computational Linguistics.
An improved oracle for dependency parsing with online reordering. Joakim Nivre, Marco Kuhlmann, Johan Hall, Proceedings of the 11th International Conference on Parsing Technologies (IWPT). the 11th International Conference on Parsing Technologies (IWPT)Joakim Nivre, Marco Kuhlmann, and Johan Hall. 2009. An improved oracle for dependency parsing with online reordering. In Proceedings of the 11th International Conference on Parsing Technologies (IWPT), pages 73-76.
Joakim Nivre, Željko Agić, Lars Ahrenberg, Maria Jesus Aranzabe, Masayuki Asahara, Aitziber Atutxa, Miguel Ballesteros, John Bauer, Kepa Bengoetxea, Riyaz Ahmad Bhat, Eckhard Bick, Cristina Bosco, Gosse Bouma, Sam Bowman, Marie Candito, Cebiroglu Gülşen, Giuseppe G A Eryigit, Fabricio Celano, Jinho Chalub, Choi, Miriam Agrı Çöltekin, Elizabeth Connor, Marie-Catherine Davidson, Valeria De Marneffe, Arantza De Paiva, Kaja Diaz De Ilarraza, Timothy Dobrovoljc, Kira Dozat, Puneet Droganova, Marhaba Dwivedi, Tomaž Eli, Richárd Erjavec, Jennifer Farkas, Cláudia Foster, Katarína Freitas, Daniel Gajdošová, Marcos Galbraith, Filip Garcia, Iakes Ginter, Koldo Goenaga, Memduh Gojenola, Yoav Gökırmak, Xavier Gómez Goldberg, Berta Gonzáles Guinovart, Matias Saavedra, Normunds Grioni, Bruno Grūzītis, Nizar Guillaume, Jan Habash, Linh Hà Hajič, Dag Mỹ, Barbora Haug, Petter Hladká, Radu Hohle, Elena Ion, Anders Irimia, Fredrik Johannsen, Hüner Jørgensen, Hiroshi Kaşıkara, Jenna Kanayama, Natalia Kanerva, Simon Kotsyba, Veronika Krek, Phng Laippala, Alessandro Lê H`ông, Nikola Lenci, Ljubešić, Faculty of Mathematics and Physics. Lng Nguy˜ên Thi . , Huy`ên Nguy˜ên Thi . Minh, Vitaly Nikolaev, Hanna Nurmi, Stina Ojala, Petya Osenova, Lilja Øvrelid, Elena Pascual, Marco Passarotti, Cenel-Augusto Perez, Guy Perrier, Slav Petrov, Jussi Piitulainen, Barbara Plank, Martin Popel, Lauma Pretkalniņa, Prokopis Prokopidis, Tiina Puolakainen, Sampo Pyysalo, Alexandre Rademaker, Loganathan Ramasamy, Livy Real, Laura Rituma, Rudolf Rosa, Shadi Saleh, Manuela Sanguinetti, Baiba Saulīte, Sebastian Schuster, Djamé Seddah, Wolfgang SeekerOlga Lyashevskaya, Teresa Lynn, Aibek Makazhanov, Christopher Manning, Cȃtȃlina Mȃrȃnduc, David Mareček, Héctor Martínez Alonso, André Martins, Jan Mašek, Yuji Matsumoto, Ryan McDonald, Anna Missilä, Verginica Mititelu, Yusuke Miyao, Simonetta Montemagni, Amir More, Shunsuke Mori, Bohdan Moskalevskyi, Kadri Muischnek, Nina Mustafina, Kaili Müürisep; Lena Shakurova, Mo Shen, Dmitry Sichinava, Natalia Silveira, Maria Simi, Radu Simionescu, Katalin Simkó, MáriaŠimková, Kiril Simov, Aaron Smith, Alane Suhr, Umut Sulubacak, Zsolt Szántó, Dima Taji, Takaaki Tanaka, Reut Tsarfaty, Francis Tyers; Jonathan North Washington, ZdeněkŽabokrtský, Amir Zeldes, Daniel ZemanSumire Uematsu, Larraitz Uria, Gertjan van Noord, Viktor Varga, Veronika Vincze ; Charles Universityand Hanzhi Zhu. 2017. Universal dependencies 2.0. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (ÚFALJoakim Nivre,Željko Agić, Lars Ahrenberg, Maria Jesus Aranzabe, Masayuki Asahara, Aitziber Atutxa, Miguel Ballesteros, John Bauer, Kepa Bengoetxea, Riyaz Ahmad Bhat, Eckhard Bick, Cristina Bosco, Gosse Bouma, Sam Bowman, Marie Candito, Gülşen Cebiroglu Eryigit, Giuseppe G. A. 
Celano, Fabricio Chalub, Jinho Choi, Ç agrı Çöltekin, Miriam Connor, Elizabeth Davidson, Marie-Catherine de Marneffe, Valeria de Paiva, Arantza Diaz de Ilarraza, Kaja Dobrovoljc, Timothy Dozat, Kira Droganova, Puneet Dwivedi, Marhaba Eli, Tomaž Erjavec, Richárd Farkas, Jennifer Foster, Cláudia Freitas, Katarína Gajdošová, Daniel Galbraith, Marcos Gar- cia, Filip Ginter, Iakes Goenaga, Koldo Gojenola, Memduh Gökırmak, Yoav Goldberg, Xavier Gómez Guino- vart, Berta Gonzáles Saavedra, Matias Grioni, Normunds Grūzītis, Bruno Guillaume, Nizar Habash, Jan Hajič, Linh Hà Mỹ, Dag Haug, Barbora Hladká, Petter Hohle, Radu Ion, Elena Irimia, Anders Johannsen, Fredrik Jørgensen, Hüner Kaşıkara, Hiroshi Kanayama, Jenna Kanerva, Natalia Kotsyba, Simon Krek, Veronika Laip- pala, Phng Lê H`ông, Alessandro Lenci, Nikola Ljubešić, Olga Lyashevskaya, Teresa Lynn, Aibek Makazhanov, Christopher Manning, Cȃtȃlina Mȃrȃnduc, David Mareček, Héctor Martínez Alonso, André Martins, Jan Mašek, Yuji Matsumoto, Ryan McDonald, Anna Missilä, Verginica Mititelu, Yusuke Miyao, Simonetta Montemagni, Amir More, Shunsuke Mori, Bohdan Moskalevskyi, Kadri Muischnek, Nina Mustafina, Kaili Müürisep, Lng Nguy˜ên Thi . , Huy`ên Nguy˜ên Thi . Minh, Vitaly Nikolaev, Hanna Nurmi, Stina Ojala, Petya Osenova, Lilja Øvrelid, Elena Pascual, Marco Passarotti, Cenel-Augusto Perez, Guy Perrier, Slav Petrov, Jussi Piitulainen, Bar- bara Plank, Martin Popel, Lauma Pretkalniņa, Prokopis Prokopidis, Tiina Puolakainen, Sampo Pyysalo, Alexan- dre Rademaker, Loganathan Ramasamy, Livy Real, Laura Rituma, Rudolf Rosa, Shadi Saleh, Manuela San- guinetti, Baiba Saulīte, Sebastian Schuster, Djamé Seddah, Wolfgang Seeker, Mojgan Seraji, Lena Shakurova, Mo Shen, Dmitry Sichinava, Natalia Silveira, Maria Simi, Radu Simionescu, Katalin Simkó, MáriaŠimková, Kiril Simov, Aaron Smith, Alane Suhr, Umut Sulubacak, Zsolt Szántó, Dima Taji, Takaaki Tanaka, Reut Tsarfaty, Francis Tyers, Sumire Uematsu, Larraitz Uria, Gertjan van Noord, Viktor Varga, Veronika Vincze, Jonathan North Washington, ZdeněkŽabokrtský, Amir Zeldes, Daniel Zeman, and Hanzhi Zhu. 2017. Uni- versal dependencies 2.0. LINDAT/CLARIN digital library at the Institute of Formal and Applied Linguistics (ÚFAL), Faculty of Mathematics and Physics, Charles University.
Algorithms for deterministic incremental dependency parsing. Joakim Nivre, Computational Linguistics. 344Joakim Nivre. 2008. Algorithms for deterministic incremental dependency parsing. Computational Linguistics, 34(4):513-554.
A neural attention model for abstractive sentence summarization. Alexander M Rush, Sumit Chopra, Jason Weston, Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. the 2015 Conference on Empirical Methods in Natural Language ProcessingLisbon, PortugalAssociation for Computational LinguisticsAlexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379-389, Lisbon, Portugal, September. Association for Computational Linguistics.
Neural responding machine for short-text conversation. Lifeng Shang, Zhengdong Lu, Hang Li, Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language ProcessingBeijing, ChinaAssociation for Computational Linguistics1Lifeng Shang, Zhengdong Lu, and Hang Li. 2015. Neural responding machine for short-text conversation. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th Interna- tional Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1577-1586, Beijing, China, July. Association for Computational Linguistics.
Foil it! find one mismatch between image and language caption. Ravi Shekhar, Sandro Pezzelle, Yauhen Klimovich, Aurélie Herbelot, Moin Nabi, Enver Sangineto, Raffaella Bernardi, Proceedings of the 55th. the 55thRavi Shekhar, Sandro Pezzelle, Yauhen Klimovich, Aurélie Herbelot, Moin Nabi, Enver Sangineto, and Raffaella Bernardi. 2017. Foil it! find one mismatch between image and language caption. In Proceedings of the 55th
Annual Meeting of the Association for Computational Linguistics. Vancouver, CanadaAssociation for Computational Linguistics1Long Papers)Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 255-265, Vancouver, Canada, July. Association for Computational Linguistics.
Fast(er) exact decoding and global training for transition-based dependency parsing via a minimal feature set. Tianze Shi, Liang Huang, Lillian Lee, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. the 2017 Conference on Empirical Methods in Natural Language ProcessingCopenhagen, DenmarkAssociation for Computational LinguisticsTianze Shi, Liang Huang, and Lillian Lee. 2017. Fast(er) exact decoding and global training for transition-based dependency parsing via a minimal feature set. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 12-23, Copenhagen, Denmark, September. Association for Computational Linguistics.
Tokenizing, pos tagging, lemmatizing and parsing ud 2.0 with udpipe. Milan Straka, Jana Straková, Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal DependenciesVancouver, CanadaAssociation for Computational LinguisticsMilan Straka and Jana Straková. 2017. Tokenizing, pos tagging, lemmatizing and parsing ud 2.0 with udpipe. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 88-99, Vancouver, Canada, August. Association for Computational Linguistics.
Udpipe: Trainable pipeline for processing conll-u files performing tokenization, morphological analysis, pos tagging and parsing. Milan Straka, Jan Hajic, Jana Strakov, ; , Khalid Choukri, Thierry Declerck, Sara Goggi, Marko Grobelnik, Bente Maegaard, Joseph Mariani, Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016). Odijk, and Stelios Piperidisthe Tenth International Conference on Language Resources and Evaluation (LREC 2016)Helene Mazo, Asuncion Moreno; Paris, France, mayNicoletta Calzolari (Conference Chair). European Language Resources Association (ELRAMilan Straka, Jan Hajic, and Jana Strakov. 2016. Udpipe: Trainable pipeline for processing conll-u files perform- ing tokenization, morphological analysis, pos tagging and parsing. In Nicoletta Calzolari (Conference Chair), Khalid Choukri, Thierry Declerck, Sara Goggi, Marko Grobelnik, Bente Maegaard, Joseph Mariani, Helene Mazo, Asuncion Moreno, Jan Odijk, and Stelios Piperidis, editors, Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC 2016), Paris, France, may. European Language Resources Association (ELRA).
Grammar as a foreign language. Oriol Vinyals, Terry Kaiser, Slav Koo, Ilya Petrov, Geoffrey Sutskever, Hinton, Advances in Neural Information Processing Systems. C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. GarnettCurran Associates, Inc28Oriol Vinyals, Ł ukasz Kaiser, Terry Koo, Slav Petrov, Ilya Sutskever, and Geoffrey Hinton. 2015. Grammar as a foreign language. In C. Cortes, N. D. Lawrence, D. D. Lee, M. Sugiyama, and R. Garnett, editors, Advances in Neural Information Processing Systems 28, pages 2773-2781. Curran Associates, Inc.
Show, attend and tell: Neural image caption generation with visual attention. Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, Yoshua Bengio, Kelvin Xu, Jimmy Ba, Ryan Kiros, Kyunghyun Cho, Aaron Courville, Ruslan Salakhutdinov, Richard Zemel, and Yoshua Bengio. 2015. Show, attend and tell: Neural image caption generation with visual attention.
The parse is darc and full of errors: Universal dependency parsing with transition-based and graph-based algorithms. Kuan Yu, Pavel Sofroniev, Erik Schill, Erhard Hinrichs, Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal DependenciesVancouver, CanadaAssociation for Computational LinguisticsKuan Yu, Pavel Sofroniev, Erik Schill, and Erhard Hinrichs. 2017. The parse is darc and full of errors: Uni- versal dependency parsing with transition-based and graph-based algorithms. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 126-133, Vancouver, Canada, August. Association for Computational Linguistics.
Daniel Zeman, Martin Popel, Milan Straka, Jan Hajic, Joakim Nivre, Filip Ginter, Juhani Luotolahti, Sampo Pyysalo, Slav Petrov, Martin Potthast, Francis Tyers, Elena Badmaeva, Memduh Gokirmak, Anna Nedoluzhko, Silvie Cinkova, Jan Hajic Jr, Jaroslava Hlavacova, Václava Kettnerová, Zdenka Uresova, Jenna Kanerva, Stina Ojala, Anna Missilä, Christopher D Manning, Sebastian Schuster, Siva Reddy, Dima Taji, Nizar Habash, Herman Leung. Héctor Martínez Alonso, Ç agrı Çöltekin, Umut Sulubacak, Hans Uszkoreit, Vivien Macketanz, Aljoscha Burchardt, Kim Harris, Katrin Marheinecke, Georg Rehm, Tolga Kayadelen, Mohammed Attia, Ali Elkahky, Zhuoran Yu, Emily Pitler, Saran Lertpradit, Michael Mandl, Jesse Kirchner, Hector Fernandez Alcalde, Jana Strnadová, Esha Banerjee, Ruli ManurungMarie-Catherine de Marneffe, Manuela Sanguinetti, Maria Simi, Hiroshi Kanayama, Valeria de-Paiva, Kira Droganova; Antonio Stella, Atsuko Shimada, SookyoungDaniel Zeman, Martin Popel, Milan Straka, Jan Hajic, Joakim Nivre, Filip Ginter, Juhani Luotolahti, Sampo Pyysalo, Slav Petrov, Martin Potthast, Francis Tyers, Elena Badmaeva, Memduh Gokirmak, Anna Nedoluzhko, Silvie Cinkova, Jan Hajic jr., Jaroslava Hlavacova, Václava Kettnerová, Zdenka Uresova, Jenna Kanerva, Stina Ojala, Anna Missilä, Christopher D. Manning, Sebastian Schuster, Siva Reddy, Dima Taji, Nizar Habash, Her- man Leung, Marie-Catherine de Marneffe, Manuela Sanguinetti, Maria Simi, Hiroshi Kanayama, Valeria de- Paiva, Kira Droganova, Héctor Martínez Alonso, Ç agrı Çöltekin, Umut Sulubacak, Hans Uszkoreit, Vivien Macketanz, Aljoscha Burchardt, Kim Harris, Katrin Marheinecke, Georg Rehm, Tolga Kayadelen, Mohammed Attia, Ali Elkahky, Zhuoran Yu, Emily Pitler, Saran Lertpradit, Michael Mandl, Jesse Kirchner, Hector Fer- nandez Alcalde, Jana Strnadová, Esha Banerjee, Ruli Manurung, Antonio Stella, Atsuko Shimada, Sookyoung
Conll 2017 shared task: Multilingual parsing from raw text to universal dependencies. Gustavo Kwak, Tatiana Mendonca, Rattima Lando, Josie Nitisaroj, Li, Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal DependenciesAssociation for Computational LinguisticsKwak, Gustavo Mendonca, Tatiana Lando, Rattima Nitisaroj, and Josie Li. 2017. Conll 2017 shared task: Multilingual parsing from raw text to universal dependencies. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 1-19. Association for Computational Linguistics. |
236,486,279 | Agenda Pushing in Email to Thwart Phishing | In this work, we draw parallels in automatically responding to emails for combating social-engineering attacks and documentgrounded response generation. We lay out the blueprint of our approach and illustrate our reasoning. E-mails are longer than dialogue utterances and often contain multiple intents. To respond to phishing emails, we need to make decisions similar to those for documentgrounded responses-deciding what parts of long text to use and how to address each intent to generate a knowledgeable multi-component response that pushes scammers towards agendas. We propose Puppeteer as a promising solution to this end: a hybrid system that uses customizable probabilistic finite state transducers to orchestrate pushing agendas coupled with neural dialogue systems that generate responses to unexpected prompts. We emphasize the need for this system by highlighting each component's strengths and weaknesses and show how they complement each other. | [
216036372,
52967399
] | Agenda Pushing in Email to Thwart Phishing
August 5-6, 2021
Hyundong Cho jcho@isi.edu
Information Sciences Institute
University of Southern California
Genevieve Bartlett bartlett@isi.edu
Information Sciences Institute
University of Southern California
Marjorie Freedman
Information Sciences Institute
University of Southern California
Agenda Pushing in Email to Thwart Phishing
Proceedings of the 1st Workshop on Document-grounded Dialogue and Conversational Question Answering
the 1st Workshop on Document-grounded Dialogue and Conversational Question AnsweringAugust 5-6, 2021113
In this work, we draw parallels in automatically responding to emails for combating social-engineering attacks and documentgrounded response generation. We lay out the blueprint of our approach and illustrate our reasoning. E-mails are longer than dialogue utterances and often contain multiple intents. To respond to phishing emails, we need to make decisions similar to those for documentgrounded responses-deciding what parts of long text to use and how to address each intent to generate a knowledgeable multi-component response that pushes scammers towards agendas. We propose Puppeteer as a promising solution to this end: a hybrid system that uses customizable probabilistic finite state transducers to orchestrate pushing agendas coupled with neural dialogue systems that generate responses to unexpected prompts. We emphasize the need for this system by highlighting each component's strengths and weaknesses and show how they complement each other.
Introduction
The Anti-Phishing Working Group observed a doubling of phishing attacks over 2020, with business e-mail compromise scams costing an average of 75,000 per incident (APWG, 2021). Scammers use these attacks to reach a wide audience of victims and perform targeted attacks on high-value targets. Even when not fully successful, these attacks waste victims' time and resources.
To fight back against scammers, individuals, colloquially called scambaiters, have demonstrated that careful engagement with scammers can waste a scammer's time, thus reducing resources for new attacks. Engaging with scammers through dialogue in the form of email also opens up opportunities to push scammers towards actions beneficial for defense and attribution, such as getting scammers to visit a specialized honeypot or divulging information. This information can aid in identifying coordinated, large-scale attack campaigns and help with attack attribution. In this paper we introduce a framework for automating dialogue engagement with scammers and pushing agendas to get scammers to take actions.
Eliciting information from scammers and continuing an email sequence to waste their time presents challenges not addressed by existing dialogue systems. Specifically, this area of automated dialogue is challenging because: 1) email conversations are significantly different from chit-chat conversations: each turn is longer and thus usually contains more information that needs to be incorporated into the response, and has multiple intents/requests in a single turn that should be addressed; 2) the initial dialogue topics can range greatly and change quickly, and a bot must respond appropriately to new topics, goals and questions from the scammer to appear human; 3) there is a high cost associated with the scammer recognizing the dialogue is automated, as any work put in for trust building is lost if the attacker suspects he/she is talking to a bot; and 4) the scammer's agenda is independent of the bot's agenda, thus the bot needs to maintain awareness of its own goals without ignoring the competing goals of the scammer.
Using "canned" responses chosen by following a pre-written script, or performing deep-learning over expected conversation flows for eliciting information are reasonable approaches to address the challenges of keeping responses targeted, topical and persuasive without a lapse in coherency in dialogue. However, such approaches will not meet the second challenge of being robust enough to respond to open dialogue and unexpected scamming intents in a topical and directed manner.
In this paper, we introduce our approach to address all challenges with a modular hybrid dialogue system, Puppeteer. Puppeteer uses multiple Finite State Transducers (FSTs) to push and track multiple agendas in uncooperative dialogue and combines this with a neural dialogue system to keep conversation topics free-flowing and natural sounding while effectively incorporating information provided from the incoming email. We discuss our progress in building our approach and have released our framework for public use (https://github.com/STEELISI/Puppeteer).
The Puppeteer Framework
Eliciting information from SE attackers introduces a niche but important problem space that requires a specialized dialogue system to address the distinct trade-offs and risks involved in engaging with scammers for the purpose of pushing the scammer into certain actions. In this section, we introduce our dialogue framework Puppeteer and discuss how our framework deals with open-ended dialogue, while inserting and tracking progress towards specific desired actions.
First, to carry out and track progress towards specific actions, Puppeteer uses probabilistic finite state transducers (FSTs). The FST approach enables a task-oriented framework for belief tracking and context-specific natural language understanding, which both keep the conversation moving towards specific goals and bolsters accurate interpretation of any extracted information.
Dialogue based on FSTs, however, can be inflexible and brittle in the face of open-ended conversations. An FST-based dialogue approach is not, on its own, appropriate for SMS, social media, and email conversations if the goal is to keep the conversation going without revealing the responder is a bot. To address this, the Puppeteer framework combines its FST approach with deep learning and neural generative approaches. Dialogue generated through the use of pre-trained models is folded in with responses prescribed by any active FSTs in a conversation. The goal in this hybrid approach is to "script" the persuasive dialogue designed to push agendas, while incorporating a more open-ended neural dialogue system to keep the scammer engaged. An illustrative example of this ensemble is shown in Figure 1.
Pushing Agendas with FSTs: A Puppeteer agenda is defined by the states and transitions of an FST as well as the cues which indicate that a transition should be taken. The FST for an agenda captures the different pathways a conversation can go when requesting a specific action and responding to possible push-back against requests. At each turn in the conversation, the incoming message is evaluated for all cues in all active agenda FSTs. Additionally, the message is evaluated for a "non-event" for each agenda: the probability that the incoming message does not contain any cues for a particular agenda.
Each cue has a cue detector which recognizes when an indicator was found, and provides a confidence value for that decision. These confidence values are then combined with the non-event probability for an agenda and normalized. This normalization must support comparison between different cue detector confidence values and therefore is specific to the set of detectors used for an agenda. For each agenda's FST, Puppeteer tracks the probability distribution across all possible states in the FST as the conversation progresses, retiring agendas as they stall out or complete and adding new agendas based on policy rules dictating when and how to kick off agendas.
Determining when an agenda is complete is also based on thresholding. Ultimately, when the system reaches a high enough confidence the conversation has transitioned an agenda's FST to a terminus state, the agenda is considered complete. By default, Puppeteer does not use fixed thresholds for determining confidence for completion, but instead uses relative probabilities between states and configurable thresholds. This is because longer conversations tend to disperse total probability throughout all states over time. For agendas which are expected to complete over fewer turns, this default can be overridden.
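To make the belief-tracking mechanism concrete, the following sketch shows one way a probabilistic agenda FST could redistribute state probabilities from normalized cue confidences and test completion with a relative threshold. This is an illustrative approximation, not the released Puppeteer code; the class, field, and method names are invented for the example.

```python
# Illustrative sketch of per-agenda state tracking; names are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Agenda:
    states: list        # e.g. ["start", "asked_location", "got_location"]
    terminus: str       # state that marks the agenda as complete
    transitions: dict   # {(state, cue): next_state}
    belief: dict = field(default_factory=dict)

    def __post_init__(self):
        # All probability mass starts on the first state.
        self.belief = {s: (1.0 if s == self.states[0] else 0.0) for s in self.states}

    def update(self, cue_probs: dict, non_event_prob: float):
        """cue_probs maps a cue name to its normalized confidence;
        non_event_prob is the probability that no cue for this agenda fired."""
        new_belief = {s: 0.0 for s in self.states}
        for state, mass in self.belief.items():
            # Mass stays put with the non-event probability ...
            new_belief[state] += mass * non_event_prob
            # ... and moves along transitions in proportion to cue confidence.
            for cue, conf in cue_probs.items():
                nxt = self.transitions.get((state, cue))
                if nxt is not None:
                    new_belief[nxt] += mass * conf
        total = sum(new_belief.values()) or 1.0
        self.belief = {s: p / total for s, p in new_belief.items()}

    def is_complete(self, ratio: float = 2.0) -> bool:
        # Relative check instead of a fixed threshold: the terminus state must
        # dominate every other state by a configurable ratio.
        term = self.belief[self.terminus]
        rest = max(p for s, p in self.belief.items() if s != self.terminus)
        return term >= ratio * max(rest, 1e-9)
```

The relative check mirrors the observation above that probability mass disperses over long conversations, so comparing states to each other is more robust than comparing the terminus to a fixed cutoff.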
We anticipate a wide range of agendas may be needed. The Puppeteer framework is written in Python and designed to be modular, enabling the easy addition of new agendas (backed by FSTs) and allowing for modular incorporation of nearly any natural language understanding approaches in cue detection. Additionally, defining response actions is extensible to enable differing approaches for response generation. To define a Puppeteer agenda, a user describes the state machine and any custom policy and thresholds in a YAML file. Default behaviors can be easily customized by overriding the appropriate delegator mixin class.
Figure 1: An example of a response generated by a neural dialogue system folded into the script indicated by the FST to pursue an information collection agenda. Each component complements one another to generate an effective response for eliciting the attacker's information.
Currently, cue detectors are managed by Snips NLU (Coucke et al., 2018). For each transition cue, the user supplies a file of example sentences or phrases which indicate a transition should be taken, and optionally a file of examples for negative indicators. For example, if a cue detector is looking for text that someone lives in a location, a positive example would be "I live in New York" and a negative example would be "I want to visit New York". These negative examples help filter out false positives. These files are used to create a Snips engine which gives confidence scores on found intents in incoming messages. In practice, we have found most indicators need only 20-40 positive and negative example sentences each, as cues are only employed in contexts likely to contain a small set of specific intents and need only to distinguish between "no intent" and the handful of intents in an active agenda. The framework is designed so Snips NLU can be replaced with another NLU approach. To do so, the user must supply a function to Puppeteer which takes in incoming message content and returns a confidence score that a particular cue is found in the incoming messages.
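The pluggable interface described above can be illustrated as follows. The sketch below is hypothetical: the function names are invented, and the toy lexical-overlap scorer only stands in for a trained NLU engine such as Snips, which the real system uses.

```python
# Sketch of a pluggable cue detector: message text -> confidence in [0, 1].
from typing import Callable

CueDetector = Callable[[str], float]


def make_overlap_detector(positive: list, negative: list) -> CueDetector:
    """Build a toy detector from positive/negative example sentences.

    A real deployment would train an NLU engine on these files; here we just
    score word overlap with positives and penalize overlap with negatives.
    """
    def score(examples, text):
        words = set(text.lower().split())
        if not examples:
            return 0.0
        return max(len(words & set(ex.lower().split())) / max(len(words), 1)
                   for ex in examples)

    def detector(message: str) -> float:
        conf = score(positive, message) - 0.5 * score(negative, message)
        return min(max(conf, 0.0), 1.0)

    return detector


lives_in = make_overlap_detector(
    positive=["I live in New York", "my home is in Boston"],
    negative=["I want to visit New York"],
)
print(lives_in("these days I live in Chicago"))   # moderate confidence
print(lives_in("I want to visit New York soon"))  # pushed towards zero
```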
Each agenda has a configurable number of actions associated with each state in its FST; an action can be kicked off any time the probability that the conversation has reached that state passes the policy threshold. The default action for all states is to pull a response from a template file, and users can provide additional functions and link these to states in their FST definition for an agenda. In use with our phishing defense system, most agendas have additional actions for states where the scammer has responded with information, which we pass to other functions of our phishing system such as the attribution module.
Neural Dialogue System: In our current implementation, the neural dialogue system can be chosen to be either a BERT-based question answering system called Closed-Domain Question Answering (cdQA) or a fine-tuned GPT-2 model. cdQA offers indirect functionality as a dialogue system by retrieving segments of text relevant to a given query. As its name suggests, it is closed-domain in the sense that it only retrieves answers from a given set of source documents, but the source documents can be expanded to accommodate a variety of domains.
Our GPT-2 model is SpolinBot, which can be used as a stand-alone dialogue system. SpolinBot is first fine-tuned with Personachat (Zhang et al., 2018) to adapt to the dialogue domain and then further tuned with SPOLIN to ground its response to the incoming email by learning how to incorporate the "Yes, and" principle of improvisational theatre (Cho and May, 2020). We use training details outlined by Wolf et al. (2019).
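A minimal sketch of the generative side of such a component, using an off-the-shelf GPT-2 checkpoint from Hugging Face transformers, is shown below. The SpolinBot checkpoint and its Personachat/SPOLIN fine-tuning are not reproduced here; the function name and the plain "gpt2" model are placeholders.

```python
# Hedged sketch: conditioning a generic GPT-2 on the incoming text.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")


def neural_reply(incoming_text: str, max_new_tokens: int = 40) -> str:
    # Condition generation on the scammer's text so the reply stays on topic.
    inputs = tokenizer(incoming_text, return_tensors="pt")
    output_ids = model.generate(
        **inputs,
        max_new_tokens=max_new_tokens,
        do_sample=True,
        top_p=0.9,
        pad_token_id=tokenizer.eos_token_id,
    )
    full = tokenizer.decode(output_ids[0], skip_special_tokens=True)
    return full[len(incoming_text):].strip()   # keep only the continuation


print(neural_reply("I need you to buy gift cards for my boss today."))
```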
Importance of a Hybrid Approach: The importance of correctly integrating the components becomes evident by observing the shortcomings of each component when used in isolation. Figure 2 demonstrates the components in isolation. The FST approach is stilted in pushing an agenda, as it is limited to responses for agendas deemed relevant to the conversation and does not directly address questions. The neural dialogue systems cannot push an agenda, but only respond to the prompt.
In contrast, Figure 1 demonstrates the strengths of each component when they are ideally combined together to generate an effective response.
Putting them together: For each paragraph from the email other than the header and the signature, Puppeteer currently consults the cdQA component for questions and the yes-and bot for non-question text and text which has no indicators for any agenda. As shown in Figure 1, the responses from the neural dialogue component and the Puppeteer agendas are naively appended in order of the parts of the email that they respond to. However, it may often be the case that some parts of the email do not necessarily need a response. Improving how and when components are called on for responses and how these responses are combined is an ongoing effort. So far, empirical results show our current combining approach does relatively well on short prompts, but this analysis is particularly challenging due to the lack of automatic evaluation metrics for neural dialogue systems and the large variance of resulting models based on different training data.
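One possible shape of this naive combination strategy is sketched below. The routing rule (question vs. non-question), the agenda methods has_cues and next_utterance, and the callable names are all invented for illustration and simplify the behavior described above.

```python
# Rough sketch of per-paragraph routing and naive concatenation of responses.
def compose_reply(paragraphs, agendas, qa_answer, yes_and_reply):
    """paragraphs: body paragraphs of the incoming email (header/signature removed).
    agendas: active agenda objects exposing has_cues(text) and next_utterance().
    qa_answer / yes_and_reply: callables wrapping cdQA and the yes-and bot."""
    parts = []
    for para in paragraphs:
        if para.strip().endswith("?"):
            parts.append(qa_answer(para))              # questions go to the QA model
        elif not any(a.has_cues(para) for a in agendas):
            parts.append(yes_and_reply(para))          # unexpected text: keep engaging
    for agenda in agendas:
        parts.append(agenda.next_utterance())          # fold in agenda-pushing lines
    return " ".join(p for p in parts if p)
```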
Related Work
Social engineering (SE) is the act of getting users to compromise information systems. Contrary to technical attacks directly on network and computer systems, SE attacks target humans with access to information and manipulate these target users to divulge confidential information (Krombholz et al., 2015). Phishing is a specific type of social engineering attack in which targets are contacted through digital channels such as e-mail, SMS or social media to lure individuals into providing sensitive data such as personally identifiable information, system log in credentials or organization details (Hong, 2012). Our work focuses on generating dialogue to engage such scammers over one or more of these digital, text-based channels.
Most research efforts addressing SE look at detection (e.g. Basnet et al. (2008); Chen et al. (2014); Singh et al. (2015)) and defending against such attacks by dropping or otherwise terminating such attacks (e.g. Chaudhry et al. (2016); Gragg (2003); Chandra et al. (2015)). An anti-phishing project by Netsafe 2 picks a curated personality and uses automated email responses to waste the attacker's time as much as possible, but it's not open-sourced and little is known about how it works. Our system is similar to Netsafe's project in that it is focused on actively engaging scammers through automated dialogue, but Puppeteer also pushes scammers towards actions favorable for attribution and defense. We rely on separate detection methods to identify messages and senders the Puppeteer dialogue system should engage.
Only recently have research efforts looked at using automated text-based dialogue to respond to scammers. Li et al. (2019) leverage intent and semantic labels in non-collaborative dialogue corpora to distinguish on-task and off-task dialogue and therefore enhance human evaluation scores for engagement and coherence. We aim to achieve a similar objective with the additional goal of pushing a range of agendas and responding appropriately and topically over a broad range of open dialogue. Hobbyists and commercial developers also have looked at automatic responses to scammers. These efforts are interactive spoken-word approaches that detect silence in conversation and interject prerecorded non sequiturs to waste a scam caller's time (Oberhaus, 2018;TelTech, 2020). While one of the goals of our work is to waste scammer time, Puppeteer performs natural language understanding to engage scammers at a deeper level and push agendas with the ultimate goal of pushing scammers into actions which aid attribution.
Our hybrid system is inspired by a large body of existing work in dialogue systems. Hudson and Newell (1992) propose probabilistic FSTs for managing dialogue under uncertainty, while many dialogue systems incorporate FSTs for management functionality in spoken dialogue systems (Pietquin and Dutoit, 2003; Chu et al., 2005; Sonntag, 2006; Hori et al., 2009). Recent interest in large pre-trained language models based on Transformers and open-domain question answering systems paved the way for our neural network approaches to be used as open-domain dialogue systems, such as GPT-2 or DrQA (Vaswani et al., 2017; Devlin et al., 2019; Liu et al., 2019; Radford et al., 2018; Chen et al., 2017; Farias et al., 2019). The novelty of Puppeteer is in the combination of these two approaches to address the unique challenges of system-scammer dialogue.
Conclusion
In this paper we introduced email response generation for phishing as a challenging dialogue domain. Our approach draws on similarities with document-grounded response generation. As a first step to address the challenges of automating phishing response, we proposed Puppeteer and made it publicly available. Puppeteer's modular architecture makes it easy to augment or replace its components to tackle individual challenges. These components complement one another in generating suitable responses for engaging scammers and inserting agendas, but it remains an open problem to seamlessly combine response components into a composed email response.
This material is based on research sponsored by the AFRL and DARPA under agreement number FA8650-18-C-7878. The views and conclusions contained herein are those of the authors and should not be interpreted as necessarily representing the official policies or endorsements, either expressed or implied, of the AFRL, DARPA, or the U.S. Government.
Figure 2: Examples that highlight the weaknesses of individual components. The FST approach is stilted in pushing an agenda as it does not address the question posed by the scammer. The neural dialogue system (NDS) fails to respond to specific tasks.
https://rescam.org
APWG. 2021. Phishing activity trends report, 4th quarter 2020.
Ram Basnet, Srinivas Mukkamala, and Andrew H. Sung. 2008. Detection of phishing attacks: A machine learning approach. In Soft Computing Applications in Industry, pages 373-383. Springer.
J. Vijaya Chandra, Narasimham Challa, and Sai Kiran Pasupuleti. 2015. Intelligence based defense system to protect from advanced persistent threat by means of social engineering on social cloud platform. Indian Journal of Science and Technology, 8(28):1.
Junaid Ahsenali Chaudhry, Shafique Ahmad Chaudhry, and Robert G. Rittenhouse. 2016. Phishing attacks and defenses. International Journal of Security and Its Applications, 10(1):247-256.
Danqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open-domain questions. arXiv preprint arXiv:1704.00051.
Yi-Shin Chen, Yi-Hsuan Yu, Huei-Sin Liu, and Pang-Chieh Wang. 2014. Detect phishing by checking content consistency. In Proceedings of the 2014 IEEE 15th International Conference on Information Reuse and Integration (IEEE IRI 2014), pages 109-119. IEEE.
Hyundong Cho and Jonathan May. 2020. Grounding conversations with improvised dialogues. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 2398-2413, Online. Association for Computational Linguistics.
Shiu-Wah Chu, Ian O'Neill, Philip Hanna, and Michael McTear. 2005. An approach to multi-strategy dialogue management. In Ninth European Conference on Speech Communication and Technology.
Alice Coucke, Alaa Saade, Adrien Ball, Théodore Bluche, Alexandre Caulier, David Leroy, Clément Doumouro, Thibault Gisselbrecht, Francesco Caltagirone, Thibaut Lavril, et al. 2018. Snips voice platform: an embedded spoken language understanding system for private-by-design voice interfaces. arXiv preprint arXiv:1805.10190, pages 12-16.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In NAACL-HLT.
André Farias, Félix Mikaeliean, Matyas Amrouche, Théo Nazon, and Olivier Sans. 2019. cdQA: Closed domain question answering. https://github.com/cdqa-suite/cdQA.
David Gragg. 2003. A multi-level defense against social engineering. SANS Reading Room, 13.
Jason Hong. 2012. The state of phishing attacks. Communications of the ACM, 55(1):74-81.
Chiori Hori, Kiyonori Ohtake, Teruhisa Misu, Hideki Kashioka, and Satoshi Nakamura. 2009. Statistical dialog management applied to WFST-based dialog systems. In 2009 IEEE International Conference on Acoustics, Speech and Signal Processing, pages 4793-4796. IEEE.
Scott E. Hudson and Gary L. Newell. 1992. Probabilistic state machines: dialog management for inputs with uncertainty. In Proceedings of the 5th Annual ACM Symposium on User Interface Software and Technology, pages 199-208. ACM.
Katharina Krombholz, Heidelinde Hobel, Markus Huber, and Edgar Weippl. 2015. Advanced social engineering attacks. Journal of Information Security and Applications, 22:113-122.
Yu Li, Kun Qian, Weiyan Shi, and Zhou Yu. 2019. End-to-end trainable non-collaborative dialog system. arXiv preprint arXiv:1911.10742.
Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
Daniel Oberhaus. 2018. The story of Lenny, the internet's favorite telemarketing troll.
Olivier Pietquin and Thierry Dutoit. 2003. Aided design of finite-state dialogue management systems. In 2003 International Conference on Multimedia and Expo (ICME'03), volume 3, pages III-545. IEEE.
Alec Radford, Karthik Narasimhan, Tim Salimans, and Ilya Sutskever. 2018. Improving language understanding by generative pre-training.
Priyanka Singh, Yogendra P. S. Maravi, and Sanjeev Sharma. 2015. Phishing websites detection through supervised learning networks. In 2015 International Conference on Computing and Communications Technologies (ICCCT), pages 61-65. IEEE.
Daniel Sonntag. 2006. Towards combining finite-state, ontologies, and data driven approaches to dialogue management for multimodal question answering. In Proceedings of the 5th Slovenian First International Language Technology Conference (IS-LTC 2006).
TelTech. 2020. RoboKiller, the app that stops spam calls forever.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems, pages 5998-6008.
Thomas Wolf, Victor Sanh, Julien Chaumond, and Clement Delangue. 2019. TransferTransfo: A transfer learning approach for neural network based conversational agents. arXiv preprint arXiv:1901.08149.
Saizheng Zhang, Emily Dinan, Jack Urbanek, Arthur Szlam, Douwe Kiela, and Jason Weston. 2018. Personalizing dialogue agents: I have a dog, do you have pets too? In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 2204-2213.
7,396,295 | ICT: A Translation based Method for Cross-lingual Textual Entailment | In this paper, we present our system description in task of Cross-lingual Textual Entailment. The goal of this task is to detect entailment relations between two sentences written in different languages. To accomplish this goal, we first translate sentences written in foreign languages into English. Then, we use EDITS 1 , an open source package, to recognize entailment relations. Since EDITS only draws monodirectional relations while the task requires bidirectional prediction, thus we exchange the hypothesis and test to detect entailment in another direction. Experimental results show that our method achieves promising results but not perfect results compared to other participants. | [
384994,
7747235,
10202504,
8884845,
7080762,
5375922,
1747290,
10668091,
5030780,
12919101
] | ICT: A Translation based Method for Cross-lingual Textual Entailment
June 7-8, 2012
Fandong Meng mengfandong@ict.ac.cn
Key Lab. of Intelligent Information Processing Institute of Computing Technology Chinese Academy of Sciences
P.O. Box 2704100190BeijingChina
Hao Xiong xionghao@ict.ac.cn
Key Lab. of Intelligent Information Processing Institute of Computing Technology Chinese Academy of Sciences
P.O. Box 2704100190BeijingChina
Qun Liu liuqun@ict.ac.cn
Key Lab. of Intelligent Information Processing Institute of Computing Technology Chinese Academy of Sciences
P.O. Box 2704100190BeijingChina
ICT: A Translation based Method for Cross-lingual Textual Entailment
First Joint Conference on Lexical and Computational Semantics (*SEM)
Montréal, CanadaJune 7-8, 2012
In this paper, we present our system description in task of Cross-lingual Textual Entailment. The goal of this task is to detect entailment relations between two sentences written in different languages. To accomplish this goal, we first translate sentences written in foreign languages into English. Then, we use EDITS 1 , an open source package, to recognize entailment relations. Since EDITS only draws monodirectional relations while the task requires bidirectional prediction, thus we exchange the hypothesis and test to detect entailment in another direction. Experimental results show that our method achieves promising results but not perfect results compared to other participants.
Introduction
In the Cross-Lingual Textual Entailment (CLTE) task of SemEval 2012, the organizers hold a shared task on cross-lingual textual entailment. The task addresses textual entailment (TE) recognition under a new dimension (cross-linguality), and within a new challenging application scenario (content synchronization).
Readers can refer to Negri et al. (2012) for a more detailed introduction. Textual entailment methods, on the other hand, recognize, generate, or extract pairs of natural language expressions and infer whether, if one element is true, the other element is also true. Several methods have been proposed by previous researchers, and there have been several workshops on textual entailment in recent years. The recognizing textual entailment challenges (Bar-Haim et al., 2006; Giampiccolo, Magnini, Dagan, & Dolan, 2007; Giampiccolo, Dang, Magnini, Dagan, & Dolan, 2008), currently in their 7th year, provide additional significant thrust. Consequently, there are a large number of published articles, proposed methods, and resources related to textual entailment. A special issue on textual entailment was also recently published, and its editorial provides a brief overview of textual entailment methods (Dagan, Dolan, Magnini, & Roth, 2009).
Textual entailment recognizers judge whether or not two given language expressions constitute a correct textual entailment pair. Different methods may operate at different levels of representation of the input expressions. For example, they may treat the input expressions simply as surface strings, they may operate on syntactic or semantic representations of the input expressions, or on representations combining information from different levels. The logic-based approach is to map the language expressions to logical meaning representations, and then rely on logical entailment checks, possibly by invoking theorem provers (Rinaldi et al., 2003; Bos & Markert, 2005; Tatu & Moldovan, 2005). An alternative to using logical meaning representations is to start by mapping each word of the input language expressions to a vector that shows how strongly the word co-occurs with particular other words in corpora (Lin, 1998b), possibly also taking into account syntactic information, for example requiring that the co-occurring words participate in particular syntactic dependencies (Padó & Lapata, 2007). Several textual entailment recognizing methods operate directly on the input surface strings. For example, they compute the string edit distance (Levenshtein, 1966) of the two input strings, the number of their common words, or combinations of several string similarity measures (Malakasiotis & Androutsopoulos, 2007). Dependency grammar parsers (Melcuk, 1987; Kubler, McDonald, & Nivre, 2009) are popular in textual entailment research. However, cross-lingual textual entailment poses problems for past algorithms, and many methods cannot be applied to it directly.
In this paper, we propose a translation based method for cross-lingual textual entailment, which has been described in Mehdad et al. (2010). First, we translate the part of the text written in a foreign language, termed "t1", into English; the translation is termed "t2". Then, we use EDITS, an open source package, to recognize entailment relations between the two parts. Large-scale experiments are conducted on four language pairs: French-English, Spanish-English, Italian-English and German-English. Although our method achieves promising results as reported by the organizers, it is still far from perfect compared to other participants.
The remainder of this paper is organized as follows. We describe our system framework in Section 2. We report experimental results in Section 3 and draw our conclusions in the last section. Figure 1 illustrates the overall framework of our system, where a machine translation model is employed to translate the foreign language into English, since the original EDITS can only deal with text written in the same language. In the rest of this section, we will describe the translation module and the configuration of EDITS in detail.
Figure 1: The framework of our system.
System Description
Machine Translation
Recently, machine translation has attracted intensive attention and has been well studied in the natural language processing community. Effective models, such as the Phrase-Based model (Koehn et al., 2003), the Hierarchical Phrase-Based model (HPB) (Chiang, 2005), and the Syntax-Based model (Liu et al., 2006), have been proposed to improve translation quality. However, current translation models require a parallel corpus to extract translation rules, and parallel corpora for some language pairs such as Italian-English and Spanish-English are hard to obtain; we therefore use the Google Translation Toolkit (GTT) to generate translations.
Specifically, WMT 2 released bilingual corpora for training, so we use a portion of them to train a French-English translation engine using the hierarchical phrase-based model. We also exploit a system combination technique (Rosti et al., 2007) to improve translation quality by blending the translations of our model and GTT's. It is worth noting that GTT only gives the 1-best translation, so we duplicate it 50 times to generate a 50-best list for system combination.
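The duplication trick mentioned above can be illustrated as follows. This is a toy sketch: the file names and helper functions are hypothetical, and feeding the lists to an actual combination system is not shown.

```python
# Toy illustration of preparing equal-length hypothesis lists for combination.
def load_nbest(path, n=50):
    with open(path, encoding="utf-8") as f:
        hyps = [line.strip() for line in f if line.strip()]
    return hyps[:n]


def expand_1best(hypothesis, n=50):
    return [hypothesis] * n          # replicate the single GTT translation


hpb_nbest = load_nbest("hpb.fr-en.50best.txt")          # hypothetical file
gtt_nbest = expand_1best("the single GTT translation")  # placeholder string
# Both lists now have the same length and can be fed to the combination system.
```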
Textual Entailment
Many methods have been proposed to recognize textual entailment relations between two expressions written in the same language. Since edit distance algorithms are effective for this task, we choose this approach and use the popular toolkit EDITS to accomplish the textual entailment task.
EDITS is an open source software package used for recognizing entailment relations between two parts of text, termed "T" and "H". The system is based on edit distance algorithms and computes the "T"-"H" distance as the cost of the edit operations (i.e., insertion, deletion and substitution) that are necessary to transform "T" into "H". EDITS requires that three modules are defined: an edit distance algorithm, a cost scheme for the three edit operations, and a set of rules expressing either entailment or contradiction. Each module can be easily configured by the user, as can the system parameters. EDITS can work at different levels of complexity, depending on the linguistic analysis carried out over "T" and "H". Both linguistic processors and semantic resources that are available to the user can be integrated within EDITS, resulting in a flexible, modular and extensible approach to textual entailment. Figure 2 shows an example of two expressions that EDITS can recognize. EDITS gives an answer as to whether expression "H" is true given that expression "T" is true. The result is a Boolean value: if "H" is true given that "T" is true, then the result is "YES", otherwise "NO".
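The decision EDITS makes can be approximated by a word-level edit distance compared against a learned threshold, as in the hedged sketch below. This is an illustration of the general idea, not the actual EDITS implementation, and the function names are invented.

```python
# Word-level edit distance between T and H, with a threshold-based decision.
def word_edit_distance(t_words, h_words):
    m, n = len(t_words), len(h_words)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            sub = 0 if t_words[i - 1] == h_words[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + sub)  # substitution
    return dp[m][n]


def entails(t, h, threshold):
    t_words, h_words = t.split(), h.split()
    dist = word_edit_distance(t_words, h_words) / max(len(t_words), len(h_words), 1)
    return "YES" if dist <= threshold else "NO"   # threshold estimated on training data
```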
EDITS implements a distance-based framework which assumes that the probability of an entailment relation between a given "T"-"H" pair is inversely proportional to the distance between "T" and "H" (i.e., the higher the distance, the lower the probability of entailment). Within this framework the system implements and harmonizes different approaches to distance computation, providing both edit distance algorithms and similarity algorithms. Each algorithm returns a normalized distance score (a number between 0 and 1). At the training stage, distance scores calculated over annotated "T"-"H" pairs are used to estimate a threshold that best separates positive from negative examples. The threshold, which is stored in a model, is used at the test stage to assign an entailment judgment and a confidence score to each test pair.
Figure 3 shows our configuration file for training models; we choose the "distance" algorithm in EDITS, the "default_matcher", "ignore_case", and some other default but effective parameters.
Figure 4: The overall training and decoding procedure in our system.
Figure 4 shows our training and decoding procedure. As EDITS can only recognize textual entailment in one direction, we manually exchange the tags "T" and "H", generate the results again, and then compute the entailment relation between the two parts. For example, if "T"-"H" is "YES" and "H"-"T" is "NO", then the entailment result between them is "forward"; if "T"-"H" is "NO" and "H"-"T" is "YES", then the result is "backward"; if both "T"-"H" and "H"-"T" are "YES", the result is "bidirectional"; otherwise the result is "no_entailment".
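These combination rules translate directly into a small decision function, sketched below under the assumption that each unidirectional EDITS judgment is available as a "YES"/"NO" string.

```python
# Mapping the two unidirectional judgments to the four CLTE labels.
def combine_judgments(t_entails_h: str, h_entails_t: str) -> str:
    if t_entails_h == "YES" and h_entails_t == "YES":
        return "bidirectional"
    if t_entails_h == "YES":
        return "forward"
    if h_entails_t == "YES":
        return "backward"
    return "no_entailment"


assert combine_judgments("YES", "NO") == "forward"
assert combine_judgments("NO", "NO") == "no_entailment"
```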
Experiments and Results
Since the organizers of SemEval 2012 Task 8 supply a piece of data for training, we exploit it to optimize the parameters of EDITS. Table 1 shows the F-measure scores on the training set analyzed by EDITS, where "FE" represents French-English, "SE" represents Spanish-English, "IE" represents Italian-English and "GE" represents German-English. From Table 1, we can see that the performance of "forward" prediction is lower than the others. One explanation is that "T" is translated from a foreign language, which unavoidably introduces errors. Thus some rules used for checking "T", such as the stopword list, will be disabled. It is then possible to induce a "NO" relation between "T" and "H", which results in lower recall for "forward".
For French-English, we additionally build a system combination to improve the quality of translation. Table 2 shows the BLEU score measuring translation quality and the F-score of the entailment judgment.
Table 2: Performance of different translation models, where COMB represents system combination.
System | BLEU4 | F-score
HPB    | 28.74 | 0.496
GTT    | 30.08 | 0.508
COMB   | 30.57 | 0.516
From Table 2, we find that the translation quality only slightly affects the correctness of the entailment judgment; the difference in entailment performance is smaller than the difference in translation quality. Our explanation is that the translation models exploit phrase-based rules to direct the translation, so the translation errors mainly come from the disorder between phrases, while a distance based entailment model generally considers the similarity of phrases between the test and the hypothesis; thus the disorder of phrases influences the judgment only slightly.
Using the given training data for tuning parameters, Table 3 to Table 6 show the detailed experimental results on the testing data, where P represents precision and R indicates recall, both calculated by the given evaluation script. Results in Table 7 and Table 8 show that models trained on "CLTE" perform better than those trained on RTE1 and RTE2, except for the "bidirectional" judgment type. In Table 9, all results decoded by models trained on "CLTE" are the best. In Table 10, only a few results decoded by models trained on "RTE1" and "RTE2" have a higher score. The reason may be that the test corpora are bilingual, and some errors are introduced in the machine translation procedure when one part of each pair is translated from its language into the other. When we both train and decode on these bilingual texts, the two procedures have consistent errors, so some errors may be counteracted. If we train on RTE, a standard monolingual corpus, and decode a bilingual text, more errors may arise between the two procedures. We therefore believe that, if we use a translation based strategy (machine translation plus monolingual textual entailment) for cross-lingual textual entailment, we should use the same translation based strategy to build the training data, rather than use standard monolingual texts.
Conclusion
In this paper, we demonstrate our system framework for this year's cross-lingual textual entailment task. We propose a translation based model to address cross-lingual entailment. We first translate all foreign languages into English, and then employ EDITS to induce entailment relations. Experiments show that our method achieves promising results but not perfect results compared to other participants.
Figure 2: An example of two expressions EDITS can recognize.
Figure 3: Our configuration file for training.
Table 3: Test results on French-English
Judgment      | P     | R     | F-measure
forward       | 0.750 | 0.192 | 0.306
backward      | 0.517 | 0.496 | 0.506
no_entailment | 0.385 | 0.656 | 0.485
bidirectional | 0.444 | 0.480 | 0.462
Overall       |       |       | 0.456
Best System   |       |       | 0.570

Table 4: Test results on Spanish-English
Judgment      | P     | R     | F-measure
forward       | 0.750 | 0.240 | 0.364
backward      | 0.440 | 0.472 | 0.456
no_entailment | 0.395 | 0.560 | 0.464
bidirectional | 0.436 | 0.520 | 0.474
Overall       |       |       | 0.448
Best System   |       |       | 0.632

Table 5: Test results on Italian-English
Judgment      | P     | R     | F-measure
forward       | 0.661 | 0.296 | 0.409
backward      | 0.554 | 0.368 | 0.442
no_entailment | 0.427 | 0.448 | 0.438
bidirectional | 0.383 | 0.704 | 0.496
Overall       |       |       | 0.454
Best System   |       |       | 0.566

Table 6: Test results on German-English
Judgment      | P     | R     | F-measure
forward       | 0.718 | 0.224 | 0.341
backward      | 0.493 | 0.552 | 0.521
no_entailment | 0.390 | 0.512 | 0.443
bidirectional | 0.439 | 0.552 | 0.489
Overall       |       |       | 0.460
Best System   |       |       | 0.558

After the golden testing reference was released, we also investigated the effect of the training set on the test results. We chose testing sets from RTE1 and RTE2, both of which are English texts, as our training sets for the optimization of EDITS; the overall results are shown in Table 7 to Table 10, where CLTE is the training set given by this year's organizers.

Table 7: Test results on French-English given different training sets
Judgment      | CLTE  | RTE1  | RTE2
forward       | 0.306 | 0.248 | 0.289
backward      | 0.506 | 0.425 | 0.440
no_entailment | 0.485 | 0.481 | 0.485
bidirectional | 0.462 | 0.472 | 0.485
Overall       | 0.456 | 0.430 | 0.444

Table 8: Test results on Spanish-English given different training sets
Judgment      | CLTE  | RTE1  | RTE2
forward       | 0.364 | 0.293 | 0.297
backward      | 0.456 | 0.332 | 0.372
no_entailment | 0.464 | 0.386 | 0.427
bidirectional | 0.474 | 0.484 | 0.503
Overall       | 0.448 | 0.400 | 0.424

Table 9: Test results on Italian-English given different training sets
Judgment      | CLTE  | RTE1  | RTE2
forward       | 0.409 | 0.333 | 0.335
backward      | 0.442 | 0.394 | 0.436
no_entailment | 0.438 | 0.410 | 0.421
bidirectional | 0.496 | 0.474 | 0.480
Overall       | 0.454 | 0.420 | 0.432

Table 10: Test results on German-English given different training sets
Judgment      | CLTE  | RTE1  | RTE2
forward       | 0.341 | 0.377 | 0.425
backward      | 0.521 | 0.372 | 0.460
no_entailment | 0.443 | 0.437 | 0.457
bidirectional | 0.489 | 0.487 | 0.508
Overall       | 0.460 | 0.434 | 0.470
http://edits.fbk.eu/
http://www.statmt.org/wmt12/
Acknowledgments
The authors were supported by the National Science Foundation of China, Contracts 90920004, and High-Technology R&D Program (863) Projects No. 2011AA01A207 and 2012BAH39B03. We thank the organizers for their generously supplied resources and arduous preparation. We also thank the anonymous reviewers for their thoughtful suggestions.
References
Bar-Haim, R., Dagan, I., Dolan, B., Ferro, L., Giampiccolo, D., Magnini, B., & Szpektor, I. 2006. The 2nd PASCAL recognising textual entailment challenge. In Proc. of the 2nd PASCAL Challenges Workshop on Recognising Textual Entailment, Venice, Italy.
Bos, J., & Markert, K. 2005. Recognising textual entailment with logical inference. In Proc. of the Conf. on HLT and EMNLP, pp. 628-635, Vancouver, BC, Canada.
Dagan, I., Dolan, B., Magnini, B., & Roth, D. 2009. Recognizing textual entailment: Rational, evaluation and approaches. Natural Language Engineering, 15(4), i-xvii. Editorial of the special issue on Textual Entailment.
David Chiang. 2005. A hierarchical phrase-based model for statistical machine translation. In Proceedings of ACL 2005, pages 263-270.
Giampiccolo, D., Dang, H., Magnini, B., Dagan, I., & Dolan, B. 2008. The fourth PASCAL recognizing textual entailment challenge. In Proc. of the Text Analysis Conference, pp. 1-9, Gaithersburg, MD.
Giampiccolo, D., Magnini, B., Dagan, I., & Dolan, B. 2007. The third PASCAL recognizing textual entailment challenge. In Proc. of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pp. 1-9, Prague, Czech Republic.
I. Dagan and O. Glickman. 2004. Probabilistic Textual Entailment: Generic Applied Modeling of Language Variability. In Proceedings of the PASCAL Workshop of Learning Methods for Text Understanding and Mining.
Ion Androutsopoulos and Prodromos Malakasiotis. 2010. A Survey of Paraphrasing and Textual Entailment Methods. Journal of Artificial Intelligence Research, 32, 135-187.
Kouylekov, M. and Negri, M. 2010. An open-source package for recognizing textual entailment. In Proceedings of the ACL 2010 System Demonstrations, pages 42-47.
Kubler, S., McDonald, R., & Nivre, J. 2009. Dependency Parsing. Synthesis Lectures on HLT. Morgan and Claypool Publishers.
Levenshtein, V. 1966. Binary codes capable of correcting deletions, insertions, and reversals. Soviet Physics-Doklady, 10, 707-710.
Lin, D. 1998b. An information-theoretic definition of similarity. In Proc. of the 15th Int. Conf. on Machine Learning, pp. 296-304, Madison, WI. Morgan Kaufmann, San Francisco, CA.
Malakasiotis, P., & Androutsopoulos, I. 2007. Learning textual entailment using SVMs and string similarity measures. In Proc. of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pp. 42-47, Prague. ACL.
Mehdad, Y., Negri, M., and Federico, M. 2010. Towards Cross-Lingual Textual Entailment. In Human Language Technologies: The 2010 Annual Conference of the NAACL, pages 321-324.
Mehdad, Y., Negri, M., and Federico, M. 2011. Using bilingual parallel corpora for cross-lingual textual entailment. In Proceedings of ACL-HLT.
Melcuk, I. 1987. Dependency Syntax: Theory and Practice. State University of New York Press.
M. Negri, A. Marchetti, Y. Mehdad, L. Bentivogli, and D. Giampiccolo. 2012. SemEval-2012 Task 8: Cross-lingual Textual Entailment for Content Synchronization. In Proceedings of the 6th International Workshop on Semantic Evaluation (SemEval 2012).
Negri, M., Bentivogli, L., Mehdad, Y., Giampiccolo, D., and Marchetti, A. 2011. Divide and conquer: crowdsourcing the creation of cross-lingual textual entailment corpora. In Proceedings of the Conference on Empirical Methods in Natural Language Processing.
Padó, S., & Lapata, M. 2007. Dependency-based construction of semantic space models. Computational Linguistics, 33(2), 161-199.
Philipp Koehn, Franz J. Och, and Daniel Marcu. 2003. Statistical phrase-based translation. In Proceedings of the 2003 Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics, Edmonton, Canada, July.
Rinaldi, F., Dowdall, J., Kaljurand, K., Hess, M., & Molla, D. 2003. Exploiting paraphrases in a question answering system. In Proc. of the 2nd Int. Workshop in Paraphrasing, pp. 25-32, Saporo, Japan.
Rosti, A., Matsoukas, S., and Schwartz, R. 2007. Improved word-level system combination for machine translation. In Annual Meeting of the Association for Computational Linguistics.
Tatu, M., & Moldovan, D. 2005. A semantic approach to recognizing textual entailment. In Proc. of the Conf. on HLT and EMNLP, pp. 371-378, Vancouver, Canada.
Tatu, M., & Moldovan, D. 2007. COGEX at RTE 3. In Proc. of the ACL-PASCAL Workshop on Textual Entailment and Paraphrasing, pp. 22-27, Prague, Czech Republic.
Yang Liu, Qun Liu, and Shouxun Lin. 2006. Tree-to-string alignment template for statistical machine translation. In Proceedings of ACL 2006, pages 609-616, Sydney, Australia, July.
624,738 | A Formal Model for Information Selection in Multi-Sentence Text Extraction | Selecting important information while accounting for repetitions is a hard task for both summarization and question answering. We propose a formal model that represents a collection of documents in a two-dimensional space of textual and conceptual units with an associated mapping between these two dimensions. This representation is then used to describe the task of selecting textual units for a summary or answer as a formal optimization task. We provide approximation algorithms and empirically validate the performance of the proposed model when used with two very different sets of features, words and atomic events. | [
11419428,
6627923,
11680756,
11846745,
10827006
] | A Formal Model for Information Selection in Multi-Sentence Text Extraction
Elena Filatova filatova@cs.columbia.edu
Department of Computer Science
Center for Computational Learning Systems
Columbia University New York
10027NYUSA
Vasileios Hatzivassiloglou
Columbia University
10027New YorkNYUSA
A Formal Model for Information Selection in Multi-Sentence Text Extraction
Selecting important information while accounting for repetitions is a hard task for both summarization and question answering. We propose a formal model that represents a collection of documents in a two-dimensional space of textual and conceptual units with an associated mapping between these two dimensions. This representation is then used to describe the task of selecting textual units for a summary or answer as a formal optimization task. We provide approximation algorithms and empirically validate the performance of the proposed model when used with two very different sets of features, words and atomic events.
Introduction
Many natural language processing tasks involve the collection and assembling of pieces of information from multiple sources, such as different documents or different parts of a document. Text summarization clearly entails selecting the most salient information (whether generically or for a specific task) and putting it together in a coherent summary. Question answering research has recently started examining the production of multi-sentence answers, where multiple pieces of information are included in the final output.
When the answer or summary consists of multiple separately extracted (or constructed) phrases, sentences, or paragraphs, additional factors influence the selection process. Obviously, each of the selected text snippets should individually be important. However, when many of the competing passages are included in the final output, the issue of information overlap between the parts of the output comes up, and a mechanism for addressing redundancy is needed. Current approaches in both summarization and long answer generation are primarily oriented towards making good decisions for each potential part of the output, rather than examining whether these parts overlap. Most current methods adopt a statistical framework, without full semantic analysis of the selected content passages; this makes the comparison of content across multiple selected text passages hard, and necessarily approximated by the textual similarity of those passages.
Thus, most current summarization or long-answer question-answering systems employ two levels of analysis: a content level, where every textual unit is scored according to the concepts or features it covers, and a textual level, where, before being added to the final output, the textual units deemed to be important are compared to each other and only those that are not too similar to other candidates are included in the final answer or summary. This comparison can be performed purely on the basis of text similarity, or on the basis of shared features that may be the same as the features used to select the candidate text units in the first place.
In this paper, we propose a formal model for integrating these two tasks, simultaneously performing the selection of important text passages and the minimization of information overlap between them. We formalize the problem by positing a textual unit space, from which all potential parts of the summary or answer are drawn, a conceptual unit space, which represents the distinct conceptual pieces of information that should be maximally included in the final output, and a mapping between conceptual and textual units. All three components of the model are application-and task-dependent, allowing for different applications to operate on text pieces of different granularity and aim to cover different conceptual features, as appropriate for the task at hand. We cast the problem of selecting the best textual units as an optimization problem over a general scoring function that measures the total coverage of conceptual units by any given set of textual units, and provide general algorithms for obtaining a solution.
By integrating redundancy checking into the selection of the textual units we provide a unified framework for addressing content overlap that does not require external measures of similarity between textual units. We also account for the partial overlap of information between textual units (e.g., a single shared clause), a situation which is common in natural language but not handled by current methods for reducing redundancy.
Formal Model for Information Selection and Packing
Our model for selecting and packing information across multiple text units relies on three components that are specified by each application. First, we assume that there is a finite set T of textual units t_1, t_2, ..., t_n, a subset of which will form the answer or summary. For most approaches to summarization and question answering, which follow the extraction paradigm, the textual units t_i will be obtained by segmenting the input text(s) at an application-specified granularity level, so each t_i would typically be a sentence or paragraph. Second, we posit the existence of a finite set C of conceptual units c_1, c_2, ..., c_m. The conceptual units encode the information that should be present in the output, and they can be defined in different ways according to the task at hand and the priorities of each system. Obviously, defining the appropriate conceptual units is a core problem, akin to feature selection in machine learning: there is no exact definition of what an important concept is that would apply to all tasks. Current summarization systems often represent concepts indirectly via textual features that give high scores to the textual units that contain important information and should be used in the summary, and low scores to those textual units which are not likely to contain information worth including in the final output. Thus, many summarization approaches use as conceptual units lexical features like tf*idf weighting of words in the input text(s), words used in the titles and section headings of the source documents (Luhn, 1959; Edmundson, 1968), or certain cue phrases like significant, important and in conclusion (Kupiec et al., 1995; Teufel and Moens, 1997). Conceptual units can also be defined out of more basic conceptual units, based on the co-occurrence of important concepts (Barzilay and Elhadad, 1997) or syntactic constraints between representations of concepts (Hatzivassiloglou et al., 2001). Conceptual units do not have to be directly observable as text snippets; they can represent abstract properties that particular text units may or may not satisfy, for example, status as the first sentence in a paragraph or, more generally, position in the source text (Lin and Hovy, 1997). Some summarization systems assume that the importance of a sentence is derivable from a rhetorical representation of the source text (Marcu, 1997), while others leverage information from multiple texts to re-score the importance of conceptual units across all the sources (Hatzivassiloglou et al., 2001).
No matter how these important concepts are defined, different systems use text-observable features that either correspond to the concepts of interest (e.g., words and their frequencies) or point out those text units that potentially contain important concepts (e.g., position or discourse properties of the text unit in the source document). The former class of features can be directly converted to conceptual units in our representation, while the latter can be accounted for by postulating abstract conceptual units associated with a particular status (e.g., first sentence) for a particular textual unit. We assume that each conceptual unit has an associated importance weight w i that indicates how important unit c i is to the overall summary or answer.
A first model: Full correspondence
Having formally defined the sets T and C of textual and conceptual units, the part that remains in order to have the complete picture of the constraints given by the data and summarization approach is the mapping between textual units and conceptual units. This mapping, a function f : T ×C → [0, 1], tells us how well each conceptual unit is covered by a given textual unit. Presumably, different approaches will assign different coverage scores for even the same sentences and conceptual units, and the consistency and quality of these scores would be one way to determine the success of each competing approach.
We first examine the case where the function f is limited to zero or one values, i.e., each textual unit either contains/matches a given conceptual feature or not. This is the case with many simple features, such as words and sentence position. Then, we define the total information covered by any given subset S of T (a proposed summary or answer) as
$$I(S) = \sum_{i=1}^{m} w_i \cdot \delta_i \qquad (1)$$

where $w_i$ is the weight of the concept $c_i$ and

$$\delta_i = \begin{cases} 1, & \text{if } \exists\, t_j \in S \text{ such that } f(t_j, c_i) = 1 \\ 0, & \text{otherwise} \end{cases}$$
In other words, the information contained in a summary is the sum of the weights of the conceptual units covered by at least one of the textual units included in the summary.
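To make the 0-1 formulation concrete, the following minimal sketch (our own helper names, not from the paper) computes I(S) for a candidate set of textual units, where each unit is represented simply by the set of conceptual-unit identifiers it covers:

```python
# Sketch of equation (1): information content of a summary under a 0-1 mapping.
# `weights` maps each conceptual unit to its weight w_i; each textual unit is
# represented by the set of conceptual units c_i for which f(t_j, c_i) = 1.

def information_content(selected_units, weights):
    """Sum the weights of all conceptual units covered by at least one textual unit."""
    covered = set()
    for concept_set in selected_units:
        covered |= concept_set
    return sum(weights[c] for c in covered)

# Example: two sentences with overlapping concepts; concept "A" is only counted once.
weights = {"A": 3.0, "B": 2.0, "C": 1.5}
t1, t2 = {"A", "B"}, {"A", "C"}
print(information_content([t1, t2], weights))  # 6.5
```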
Partial correspondence between textual and conceptual units
Depending on the nature of the conceptual units, the assumption of a 0-1 mapping between textual and conceptual units may or may not be practical or even feasible. For many relatively simple representations of concepts, this restriction poses no difficulties: the concept is uniquely identified and can be recognized as present or absent in a text passage. However, it is possible that the concepts have some structure and can be decomposed to more elementary conceptual units, or that partial matches between concepts and text are natural. For example, if the conceptual units represent named entities (a common occurrence in list-type long answers), a partial match between a name found in a text and another name is possible; handling these two names as distinct concepts would be inaccurate. Similarly, an event can be represented as a concept with components corresponding to participants, time, location, and action, with only some of these components found in a particular piece of text.
Partial matches between textual and conceptual units introduce a new problem, however: if two textual units partially cover the same concept, it is not apparent to what extent the coverage overlaps. Thus, there are multiple ways to revise equation (1) in order to account for partial matches, depending on how conservative we are on the expected overlap. One such way is to assume minimum overlap (the most conservative assumption) and define the total information in the summary as
$$I(S) = \sum_{i=1}^{m} w_i \cdot \max_{j} f(t_j, c_i) \qquad (2)$$
An alternative is to consider that f (t j , c i ) represents the extent of the [0, 1] interval corresponding to concept c i that t j covers, and assume that the coverage is spread over that interval uniformly and independently across textual units. Then the combined coverage of two textual units t j and t k is
$$f(t_j, c_i) + f(t_k, c_i) - f(t_j, c_i) \cdot f(t_k, c_i)$$
This operator can be naturally extended to more than two textual units and plugged into equation (2) in the place of the max operator, resulting into an equation we will refer to as equation (3). Note that both of these equations reduce to our original formula for information content (equation (1)) if the mapping function f only produces 0 and 1 values.
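As a sketch of the two combination rules just discussed (the conservative max of equation (2) and the independent-coverage combination behind equation (3)), with illustrative function and variable names of our own:

```python
from functools import reduce

def coverage_max(f_values):
    """Equation (2): assume maximal overlap, keep the single best coverage."""
    return max(f_values) if f_values else 0.0

def coverage_independent(f_values):
    """Equation (3): combine coverages assuming each spreads independently over
    the [0, 1] interval of the concept: a + b - a*b, extended to n values."""
    return reduce(lambda a, b: a + b - a * b, f_values, 0.0)

def information_content_partial(selected_units, weights, combine):
    """I(S) = sum_i w_i * combine({f(t_j, c_i) : t_j in S})."""
    total = 0.0
    for concept, w in weights.items():
        f_values = [unit.get(concept, 0.0) for unit in selected_units]
        total += w * combine(f_values)
    return total

# Two sentences, each covering half of the concept "event":
units = [{"event": 0.5}, {"event": 0.5}]
print(information_content_partial(units, {"event": 1.0}, coverage_max))          # 0.5
print(information_content_partial(units, {"event": 1.0}, coverage_independent))  # 0.75
```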
Length and textual constraints
We have provided formulae that measure the information covered by a collection of textual units under different mapping constraints. Obviously, we want to maximize this information content. However, this can only sensibly happen when additional constraints on the number or length of the selected textual units are introduced; otherwise, the full set of available textual units would be a solution that proffers a maximal value for equations (1)-(3), i.e., ∀S ⊂ T, I(S) ≤ I(T ). We achieve this by assigning a cost p i to each textual unit t i , i = 1, . . . , n, and defining a function P over a set of textual units that provides the total penalty associated with selecting those textual units as the output. In our abstraction, replacing a textual unit with one or more textual units that provide the same content should only affect the penalty, and it makes sense to assign the same cost to a long sentence as to two sentences produced by splitting the original sentence. Also, a shorter sentence should be preferable to a longer sentence with the same information content. Hence, our operational definitions for p i and P are
$$p_i = \mathrm{length}(t_i), \qquad P(S) = \sum_{t_i \in S} p_i$$
i.e., the total penalty is equal to the total length of the answer in some basic unit (e.g., words).
Note, however, that in the general case the p_i's need not depend solely on the length, and the total penalty does not need to be a linear combination of them. The cost function can depend on features other than length, for example, the number of pronouns: the more pronouns used in a textual unit, the higher the risk of dangling references and the higher the price should be. Finding the best cost function is an interesting research problem by itself.
With the introduction of the cost function P (S) our model has two generally competing components. One approach is to set a limit on P (S) and optimize I(S) while keeping P (S) under that limit. This approach is similar to that taken in evaluations that keep the length of the output summary within certain bounds, such as the recent major summarization evaluations in the Document Understanding Conferences from 2001 to the present (Harman and Voorhees, 2001). Another approach would be to combine the two components and assign a composite score to each summary, essentially mandating a specific tradeoff between recall and precision; for example, the total score can be defined as a linear combination of I(S) and P (S), in which case the weights specify the relative importance of coverage and precision/brevity, as well as accounting for scale differences between the two metrics. This approach is similar to the calculation of recall, precision, and F-measure adopted in the recent NIST evaluation of long answers for definitional questions (Voorhees, 2003). In this paper, we will follow the first tactic of maximizing I(S) with a limit on P (S) rather than attempting to solve the thorny issues of weighing the two components appropriately.
Handling Redundancy in Summarization
Redundancy of information has been found useful in determining what text pieces should be included during summarization, on the basis that information that is repeated is likely to be central to the topic or event being discussed. Earlier work has also recognized that, while it is a good idea to select among the passages repeating information, it is also important to avoid repetition of the same information in the final output. Two main approaches have been proposed for avoiding redundancy in the output. One approach relies on grouping together potential output text units on the basis of their similarity, and outputting only a representative from each group (Hatzivassiloglou et al., 2001). Sentences can be clustered in this manner according to word overlap, or by using additional content similarity features. This approach has been recently applied to the construction of paragraph-long answers (e.g., (Blair-Goldensohn et al., 2003;Yu and Hatzivassiloglou, 2003)).
An alternative approach, proposed for the synthesis of information during query-based passage retrieval, is the maximum marginal relevance (MMR) method (Goldstein et al., 2000). This approach assigns to each potential new sentence in the output a similarity score with the sentences already included in the summary. Only those sentences that contain a substantial amount of new information can get into the summary. MMR bases this similarity score on word overlap and additional information about the time when each document was released, and thus can fail to identify repeated information when paraphrasing is used to convey the same meaning.
In contrast to these approaches, our model handles redundancy in the output at the same time it selects the output sentences. It is clear from equations (1)-(3) that each conceptual unit is counted only once whether it appears in one or multiple textual units. Thus, when we find the subset of textual units that maximizes overall information coverage with a constraint on the total number or length of textual units, the model will prefer the collection of textual units that have minimal overlap of covered conceptual units. Our approach offers three advantages versus both clustering and MMR: First, it integrates redundancy elimination into the selection process, requiring no additional features for defining a text-level similarity between selected textual units. Second, decisions are based on the same features that drive the summarization itself, not on additional surface properties of similarity. Finally, because all decisions are informed by the overlap of conceptual units, our approach accounts for partial overlap of information across textual units. To illustrate this last point, consider a case where three features A, B, and C should be covered in the output, and where three textual units are available, covering A and B, A and C, and B and C, respectively. Then our model will determine that selecting any two of the textual units is fully sufficient, while this may not be apparent on the basis of text similarity between the three text units; a clustering algorithm may form three singleton clusters, and MMR may determine that each textual unit is sufficiently different from each other, especially if A, B, and C are realized with nearly the same number of words.
Applying the Model
Having presented a formal metric for the information content (and optionally the cost) of any potential summary or answer, the task that remains is to optimize this metric and select the corresponding set of textual units for the final output. As stated in Section 2.3, one possible way to do this is to focus on the information content metric and introduce an additional constraint, limiting the total cost to a constant. An alternative is to optimize directly the composite function that combines cost and information content into a single number.
We examine the case of zero-one mappings between textual and conceptual units, where the total information content is specified by equation (1). The complexity of the problem depends on the cost function, and whether we optimize I(S) while keeping P (S) fixed or whether we optimize a combined function of both of those quantities. We will only consider the former case in the present paper. We start by examining an artificially simple case, where the cost assigned to each textual unit is 1, and the function P for combining costs is their sum. In this case, the total cost is equal to the number of textual units used in a summary.
This problem, as we have formalized it above, is identical to the Maximum Set Coverage problem studied in theoretical computer science: given C, a finite set of weighted elements, a collection T of subsets of C, and an integer k, find those k sets that maximize the total weight of the elements in the union of the selected sets (Hochbaum, 1997). In our case, the zero-one mapping allows us to view each textual unit as a subset of the conceptual unit space, containing those conceptual units covered by the textual unit, and k is the total target cost. Unfortunately, maximum set coverage is NP-hard, as the classic set cover problem (given a finite set and a collection of subsets of that set, find the smallest subcollection whose members' union is equal to the original set) can be reduced to it (Hochbaum, 1997). It follows that more general formulations of the cost function that actually are more realistic for our problem (such as defining the total cost as the sum of the lengths of the selected textual units and allowing the textual units to have different lengths) will also result in an NP-hard problem, since the special case of maximum set coverage can in turn be reduced to these more general versions.
Nevertheless, the correspondence with maximum set coverage provides a silver lining. Since the problem is known to be NP-hard, properties of simple greedy algorithms have been explored, and a straightforward local maximization method has been proved to give solutions within a known bound of the optimal solution. The greedy algorithm for maximum set coverage is as follows: start with an empty solution S, and iteratively add to S the set T_i that maximizes I(S ∪ T_i). It is provable that this algorithm is the best polynomial approximation algorithm for the problem (Hochbaum, 1997), and that it achieves a solution bounded as follows
$$I(\mathrm{OPT}) \geq I(\mathrm{GREEDY}) \geq \left(1 - \left(1 - \frac{1}{k}\right)^{k}\right) I(\mathrm{OPT}) > \left(1 - \frac{1}{e}\right) I(\mathrm{OPT}) \approx 0.6321 \cdot I(\mathrm{OPT})$$
where I(OPT) is the information content of the optimal summary and I(GREEDY) is the information content of the summary produced by this greedy algorithm.
For the more realistic case where cost is specified as the total length of the summary, and where we try to optimize I(S) with a limit on P(S) (see Section 2.3), we propose two greedy algorithms inspired by the algorithm above. Both our algorithms operate by first calculating a ranking of the textual units in decreasing order. For the first algorithm, which we call the adaptive greedy algorithm, this ranking is identical to the ranking provided by the basic greedy algorithm, i.e., each textual unit receives as its score the increase in I(S) that it generates when added to the output, in the order specified by the basic greedy algorithm. Our second greedy algorithm (dubbed the modified greedy algorithm below) modifies this ranking by prioritizing the conceptual units with the highest individual weight w_i; it ranks first the textual unit that has the highest contribution to I(S) while covering the conceptual unit with the highest individual weight, and then iteratively proceeds with the textual unit that has the highest contribution to I(S) while covering the next most important conceptual unit not yet accounted for.
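As an illustration (our own sketch, not the authors' implementation), the adaptive greedy ranking under a length budget could look as follows; the modified greedy variant would additionally force each step to cover the heaviest still-uncovered conceptual unit:

```python
def adaptive_greedy(units, weights, max_length):
    """Rank textual units by marginal gain in I(S) and stop at the length budget.

    units:   list of (text, concept_set) pairs, one per textual unit
    weights: {concept_id: w_i}
    """
    covered, selected, used = set(), [], 0
    remaining = list(units)
    while remaining:
        def gain(item):
            _, concepts = item
            return sum(weights.get(c, 0.0) for c in concepts - covered)
        best = max(remaining, key=gain)
        if gain(best) <= 0:
            break                       # nothing new left to cover
        text, concepts = best
        cost = len(text.split())        # penalty p_i = length in words
        if used + cost > max_length:
            break                       # simplification: the paper truncates the last sentence instead
        selected.append(text)
        covered |= concepts
        used += cost
        remaining.remove(best)
    return selected
```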
Given the rankings of textual units, we can then produce an output of a given length by adopting appropriate stopping criteria for when to stop adding textual units (in order according to their ranking) to the output. There is no clear rule for conforming to a specific length (for example, DUC 2001 allowed submitted summaries to exceed the target length by "a reasonable percentage", while DUC 2004 cuts summaries mid-sentence at exactly the target length). As the summary length in DUC is measured in words, in our experiments we extracted the specified number of words out of the top sentences (truncating the last sentence if necessary).
Experiments
To empirically establish the effectiveness of the presented model we ran experiments comparing evaluation scores on summaries obtained with a baseline algorithm that does not account for redundancy of information and with the two variants of greedy algorithms described in Section 4. We chose summarization as the evaluation task because "ideal" output (prepared by humans) and methods for scoring arbitrary system output were available for this task, but not for evaluating long answers to questions.
Data We chose as our input data the document sets used in the evaluation of multi-document summarization during the Document Understanding Conference (DUC), organized by NIST in 2001 (Harman and Voorhees, 2001). This collection contains 30 test document sets, each containing approximately 10 news stories on different events; document sets vary significantly in their internal coherence. For each document set 12 human-constructed summaries are provided, 3 for each of the target lengths of 50, 100, 200, and 400 words. We selected DUC 2001 because, unlike later DUCs, ideal summaries are available for multiple lengths. We consider sentences as our textual units.
Features
In our experiments we used two sets of features (i.e., conceptual units). First, we chose a fairly basic and widely used set of lexical features, namely the list of words present in each input text. We set the weight of each feature to its tf*idf value, taking idf values from http://elib.cs.berkeley.edu/docfreq/.
Our alternative set of conceptual units was the list of weighted atomic events extracted from the input texts. An atomic event is a triplet consisting of two named entities extracted from a sentence and a connector expressed by a verb or an event-related noun that appears in-between these two named entities.
The score of the atomic event depends on the frequency of the named-entity pair for the input text and the frequency of the connector for that named-entity pair. Filatova and Hatzivassiloglou (2003) define the procedure for extracting atomic events in detail, and show that these triplets capture the most important relations connecting the major constituent parts of events, such as location, dates and participants. Our hypothesis is that using these events as conceptual units would provide a reasonable basis for summarizing texts that are supposed to describe one or more events.
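A rough sketch of how such scores could be assembled once (entity, connector, entity) triplets have been extracted; the extraction procedure itself is described in Filatova and Hatzivassiloglou (2003), and the exact combination of the two frequency components below is only an illustration:

```python
from collections import Counter

def score_atomic_events(triplets):
    """Score (entity1, connector, entity2) triplets by the relative frequency of the
    named-entity pair in the input and the relative frequency of the connector for that pair."""
    pair_counts = Counter((e1, e2) for e1, _, e2 in triplets)
    connector_counts = Counter(((e1, e2), conn) for e1, conn, e2 in triplets)
    total_pairs = sum(pair_counts.values())

    scores = {}
    for (pair, conn), n in connector_counts.items():
        pair_rel = pair_counts[pair] / total_pairs   # how salient the entity pair is
        conn_rel = n / pair_counts[pair]             # how typical the connector is for that pair
        scores[(pair, conn)] = pair_rel * conn_rel   # illustrative combination, not the paper's formula
    return scores
```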
Evaluation Metric Given the difficulties in coming up with a universally accepted evaluation measure for summarization, and the fact that judgments by humans are time-consuming and labor-intensive, we adopted an automated process for comparing system-produced summaries to the ideal summaries written by humans. The ROUGE method (Lin and Hovy, 2003) is based on n-gram overlap between the system-produced and ideal summaries. As such, it is a recall-based measure, and it requires that the length of the summaries be controlled in order to allow for meaningful comparisons. Although ROUGE is only a proxy measure of summary quality, it offers the advantage that it can be readily applied to compare the performance of different systems on the same set of documents, assuming that ideal summaries are available for those documents.
Baseline Our baseline method does not consider the overlap in information content between selected textual units. Instead, we fix the score of each sentence as the sum of tf*idf values or atomic event scores. At every step we choose the remaining sentence with the largest score, until the stopping criterion for summary length is satisfied.
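The baseline thus amounts to a static ranking; a minimal sketch with our own variable names, scoring each sentence once by the summed weights of the features it contains:

```python
def baseline_ranking(sentences, weights):
    """Score each sentence by the sum of its feature weights (tf*idf or event scores)
    and sort once; redundancy between selected sentences is deliberately ignored."""
    def score(item):
        _, features = item
        return sum(weights.get(f, 0.0) for f in features)
    return [text for text, _ in sorted(sentences, key=score, reverse=True)]
```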
Results
For every version of our baseline and approximation algorithms, and separately for the tf*idf-weighted words and event features, we get a sorted list of sentences extracted according to a particular algorithm. Then, for each DUC document set we create four summaries of each suggested length (50, 100, 200, and 400 words) by extracting the first 50, 100, 200, and 400 words, respectively, from the top-ranked sentences.
To evaluate the performance of our summarizers we compare their outputs against the human models of the corresponding length provided by DUC, using the ROUGE-created scores for unigrams. Since scores are not comparable across different document sets, instead of average scores we report the number of document sets for which one algorithm outperforms another. We compare each of our approximation algorithms (adaptive and modified greedy) to the baseline. Table 1 shows the number of data sets for which the adaptive greedy algorithm outperforms our baseline. This implementation of our information packing model improves the ROUGE scores in most cases when events are used as features, while the opposite is true when tf*idf provides the conceptual units. This may be partly explained by the nature of the tf*idf-weighted word features: it is possible that important words cannot be considered independently, and that the repetition of important words in a later sentence does not necessarily mean that the sentence offers no new information. Thus words may not provide independent enough features for our approach to work. Table 2 compares our modified greedy algorithm to the baseline. In that case, the model offers gains in performance when both events and words are used as features, and in fact the gains are most pronounced with the word features. For both algorithms, the gains are generally minimal for 50-word summaries and most pronounced for the longest, 400-word summaries. This validates our approach, as the information packing model has a limited opportunity to alter the set of selected sentences when those sentences are very few (often one or two for the shortest summaries).
It is worth noting that in direct comparisons between the adaptive and modified greedy algorithms we found the latter to outperform the former. We also found events to lead to better performance than tf*idf-weighted words, with statistically significant differences. Events tend to be a particularly good representation for document sets with well-defined constituent parts (such as specific participants) that cluster around a narrow event. Events not only give us a higher absolute performance when compared to just words but also lead to more pronounced improvement when our model is employed. A more detailed analysis of the above experiments, together with a discussion of the advantages and disadvantages of our evaluation schema, can be found in (Filatova and Hatzivassiloglou, 2004).
Conclusion
In this paper we proposed a formal model for information selection and redundancy avoidance in summarization and question-answering. Within this two-dimensional model, summarization and question-answering entail mapping textual units onto conceptual units, and optimizing the selection of a subset of textual units that maximizes the information content of the covered conceptual units. The formalization of the process allows us to benefit from theoretical results, including suitable approximation algorithms. Experiments using DUC data showed that this approach does indeed lead to improvements due to better information packing over a straightforward content selection method.
Table 1: Adaptive greedy algorithm versus baseline.

Length   Events   tf*idf
50        +3       0
100       +4      −4
200       +2      −4
400       +5       0

Table 2: Modified greedy algorithm versus baseline.

Length   Events   tf*idf
50         0      +7
100       +4      +4
200       +8      +6
400       +2     +14
Acknowledgements
We wish to thank Rocco Servedio and Mihalis Yannakakis for valuable discussions of theoretical foundations of the set cover problem. This work was supported by ARDA under Advanced Question Answering for Intelligence (AQUAINT) project MDA908-02-C-0008.
References

Regina Barzilay and Michael Elhadad. 1997. Using lexical chains for text summarization. In Proceedings of the ACL/EACL 1997 Workshop on Intelligent Scalable Text Summarization, Spain.

Sasha Blair-Goldensohn, Kathleen R. McKeown, and Andrew Hazen Schlaikjer. 2003. DefScriber: A hybrid system for definitional QA. In Proceedings of the 26th Annual International ACM SIGIR Conference, Toronto, Canada, July.

Elena Filatova and Vasileios Hatzivassiloglou. 2003. Domain-independent detection, extraction, and labeling of atomic events. In Proceedings of the Recent Advances in Natural Language Processing Conference (RANLP), Bulgaria.

Elena Filatova and Vasileios Hatzivassiloglou. 2004. Event-based extractive summarization. In Proceedings of the ACL Workshop on Summarization, Barcelona, Spain, July.

Jade Goldstein, Vibhu Mittal, Jaime Carbonell, and Jamie Callan. 2000. Creating and evaluating multi-document sentence extract summaries. In Proceedings of the Ninth International Conference on Information and Knowledge Management, pages 165-172.

Donna Harman and Ellen Voorhees, editors. 2001. Proceedings of the Document Understanding Conference (DUC). NIST, New Orleans, USA.

Vasileios Hatzivassiloglou, Judith L. Klavans, Melissa L. Holcombe, Regina Barzilay, Min-Yen Kan, and Kathleen R. McKeown. 2001. SimFinder: A flexible clustering tool for summarization. In Proceedings of the Workshop on Automatic Summarization, NAACL, Pittsburgh, USA.

Dorit S. Hochbaum. 1997. Approximating covering and packing problems: Set cover, vertex cover, independent set, and related problems. In Dorit S. Hochbaum, editor, Approximation Algorithms for NP-hard Problems, pages 94-143. PWS Publishing Company, Boston, MA.

H. P. Edmundson. 1968. New methods in automatic extracting. Journal of the Association for Computing Machinery, 23(1):264-285, April.

Julian Kupiec, Jan Pedersen, and Francine Chen. 1995. A trainable document summarizer. In Proceedings of the 18th Annual International ACM SIGIR Conference, pages 68-73, Seattle, USA.

Chin-Yew Lin and Eduard Hovy. 1997. Identifying topic by position. In Proceedings of the 5th Conference on Applied Natural Language Processing (ANLP), Washington, DC.

Chin-Yew Lin and Eduard Hovy. 2003. Automatic evaluation of summaries using n-gram co-occurrence statistics. In Proceedings of the 2003 Human Language Technology Conference (HLT-NAACL 2003), Edmonton, Canada, May.

H. P. Luhn. 1959. The automatic creation of literature abstracts. IBM Journal of Research and Development, 2(2):159-165, April.

Daniel Marcu. 1997. From discourse structures to text summaries. In Proceedings of the ACL/EACL 1997 Workshop on Intelligent Scalable Text Summarization, pages 82-88, Spain.

Simone Teufel and Marc Moens. 1997. Sentence extraction as a classification task. In Proceedings of the ACL/EACL 1997 Workshop on Intelligent Scalable Text Summarization, Spain.

Ellen M. Voorhees. 2003. Evaluating answers to definition questions. In Proceedings of HLT-NAACL, Edmonton, Canada, May.

Hong Yu and Vasileios Hatzivassiloglou. 2003. Towards answering opinion questions: Separating facts from opinions and identifying the polarity of opinion sentences. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), Sapporo, Japan, July.
15,646,153 | Finding Medical Term Variations using Parallel Corpora and Distributional Similarity | We describe a method for the identification of medical term variations using parallel corpora and measures of distributional similarity. Our approach is based on automatic word alignment and standard phrase extraction techniques commonly used in statistical machine translation. Combined with pattern-based filters we obtain encouraging results compared to related approaches using similar data-driven techniques. | [
9446241,
2281602,
7289480,
12776014,
38407095,
13183512,
2755801,
15698938
] | Finding Medical Term Variations using Parallel Corpora and Distributional Similarity
Lonneke van der Plas (lonneke.vanderplas@unige.ch), Department of Linguistics, University of Geneva
Jörg Tiedemann (jorg.tiedemann@lingfil.uu.se), Department of Linguistics and Philology, Uppsala University
Proceedings of the 6th Workshop on Ontologies and Lexical Resources (Ontolex 2010), Beijing, August 2010
We describe a method for the identification of medical term variations using parallel corpora and measures of distributional similarity. Our approach is based on automatic word alignment and standard phrase extraction techniques commonly used in statistical machine translation. Combined with pattern-based filters we obtain encouraging results compared to related approaches using similar data-driven techniques.
Introduction
Ontologies provide a way to formally represent knowledge, for example for a specific domain. Ontology building has received a lot of attention in the medical domain. This interest is reflected in the existence of numerous medical ontologies, such as the Unified Medical Language System (UMLS) (McCray and Hole, 1990) with its metathesaurus, semantic network, and specialist lexicon. Although the UMLS includes information for languages other than English, the coverage for other languages is generally smaller.
In this paper we describe an approach to acquire lexical information for the Dutch medical domain automatically. In the medical domain variations in terminology often include multi-word terms such as aangeboren afwijking 'birth defect' for congenitale aandoening 'congenital disorder'. These multiple ways to refer to the same concept using distinct (multi-word) terms are examples of synonymy 1 but are often referred to as term variations. These term variations could be used to enhance existing medical ontologies for the Dutch language.
Our technique builds on the distributional hypothesis, the idea that semantically related words are distributed similarly over contexts (Harris, 1968). This is in line with the Firthian saying that, 'You shall know a word by the company it keeps.' (Firth, 1957). In other words, you can grasp the meaning of a word by looking at its contexts.
Context can be defined in many ways. Previous work has been mainly concerned with the syntactic contexts a word is found in (Lin, 1998;Curran, 2003). For example, the verbs that are in a subject relation with a particular noun form a part of its context. In accordance with the Firthian tradition these contexts can be used to determine the semantic relatedness of words. For instance, words that occur in a object relation with the verb to drink have something in common: they are liquid. Other work has been concerned with the bagof-word context, where the context of a word are the words that are found in its proximity (Wilks et al., 1993;Schütze, 1992).
Yet another context, that is much less studied, is the translational context. The translational context of a word is the set of translations it gets in other languages. For example, the translational context of cat is kat in Dutch and chat in French. This requires a rather broad understanding of the term context. The idea is that words that share a large number of translations are similar. For example both autumn and fall get the translation herfst in Dutch, Herbst in German, and automne in French. This indicates that autumn and fall are synonyms.
A straightforward place to start looking for translational context is in bilingual dictionaries. However, these are not always publicly available for all languages. More importantly, dictionaries are static and therefore often incomplete resources. We have chosen to automatically acquire word translations in multiple languages from text. Text in this case should be understood as multilingual parallel text. Automatic alignment gives us the translations of a word in multiple languages. The so-called alignment-based distributional methods described in Van der Plas (2008) apply the translational context for the discovery of single word synonyms for the general domain. Any multilingual parallel corpus can be used for this purpose. It is thus possible to focus on a special domain, such as the medical domain we are considering in this paper. The automatic alignment provides us also with domain-specific frequency information for every translation pair, which is helpful in case words are ambiguous.
Aligned parallel corpora have often been used in the field of word sense discovery, the task of discriminating the different senses words have. The idea behind it is that a word that receives different translations might be polysemous. For example, a word such as wood receives the translation woud and hout in Dutch, the former referring to an area with many trees and the latter referring to the solid material derived from trees. Whereas this type of work is all built upon the divergence of translational context, i.e. one word in the source language is translated by many different words in the target language, we are interested in the convergence of translations, i.e. two words in the source language receiving the same translation in the target language. Of course these two phenomena are not independent. The alleged convergence in the target language might well be a hidden divergence in the source language. Since the English word might be polysemous, the fact that woud and hout in Dutch are both translated in English by wood does not mean that woud and hout in Dutch are synonyms. However, the use of multiple languages overshadows the noise resulting from polysemy (van der Plas, 2008). Van der Plas (2008) shows that the way the context is defined influences the type of lexico-semantic knowledge that is discovered. After gold standard evaluations and manual inspection the author concludes that when using translational contexts tighter semantic relations such as synonymy are found, whereas the conventional syntax-based approaches retrieve hypernyms, cohyponyms, and antonyms of the target word. The performance on synonym acquisition when using translational contexts is almost twice as good as when using syntactic contexts, while the amount of data used is much smaller. Van der Plas (2008) ascribed the fact that the syntax-based method behaves in this way to the fact that loosely related words, such as wine and beer, are often found in the same syntactic contexts. The alignment-based method suffers less from this indiscriminate acceptance because words are typically translated by words with the same meaning. The word wine is typically not translated with a word for beverage nor with a word for beer, and neither is good translated with the equivalent of bad.
In this paper we are concerned with medical term variations that are in fact (multi-word) synonyms. We will use the translational context to compute similarity between terms. The translational context is not only very suitable to find tight relations between words, the transition from single-word synonyms to multi-word term variations is also straightforward due to advances in phrase-based machine translation. We will use word alignment techniques in combination with phrase extraction techniques from statistical machine translation to extract phrases and their translations from a medical parallel corpus. We combine this approach with Part-of-Speech (PoS) patterns from the term extraction literature to extract candidate terms from the phrase tables. Using similarity measures used in distributional methods we finally compute ranked lists of term variations.
We already noted that these term variations could be used to enhance existing ontologies for the Dutch language. On top of that we believe that the multi-lingual method that uses translations of multi-word terms in several languages could be used to expand resources built for English with translations in other languages (semi-) automatically. This last point falls outside the scope of this paper.
In the following section we will describe the alignment-based approaches to distributional similarity. In section 3 we will describe the methodology we followed in this paper in detail. We describe our evaluation in section 4 and discuss the results in section 5. Section 6 concludes this paper.
Alignment-based methods
In this section we explain the alignment-based approaches to distributional similarity. We will give some examples of translational context and we will explain how measures serve to determine the similarity of these contexts. We end this section with a discussion of related work.
Translational context
The translational context of a word or a multiword term is the set of translations it gets in other languages. For the acquisition of translations for the Dutch medical terms we rely on automatic word alignment in parallel corpora. Figure 1 illustrates the automatic word alignment between a Dutch and an English phrase as a result of using the IBM alignment models (Brown et al., 1993) implemented in the open-source tool GIZA++ (Och, 2003). The alignment of two texts is bi-directional. The Dutch text is aligned to the English text and vice versa (dotted lines versus continuous lines). The alignment models produced are asymmetric. Several heuristics exist to combine directional word alignments which is usually called "symmetrization". In order to cover multi-word terms standard phrase extraction techniques can be used to move from word alignment to linked phrases (see section 3.2 for more details).
Measures for computing similarity
Translational co-occurrence vectors are used to find distributionally similar words. For ease of reading, we give an example of a single-word term kat in Table 1. In our current setting the terms can be both single- or multi-word terms such as werkzame stof 'active ingredient'. Every cell in the vector refers to a particular translational co-occurrence type. For example, kat 'cat' gets the translation Katze in German. The value of these cells indicates the number of times the co-occurrence type under consideration is found in the corpus.
Each co-occurrence type has a cell frequency. Likewise each head term has a row frequency. The row frequency of a certain head term is the sum of all its cell frequencies. In our example the row frequency for the term kat 'cat' is 65. Cutoffs for cell and row frequency can be applied to discard certain infrequent co-occurrence types or head terms respectively. The more similar the vectors are, the more distributionally similar the head terms are. We need a way to compare the vectors for any two head terms to be able to express the similarity between them by means of a score. Various methods can be used to compute the distributional similarity between terms. We will explain in section 3 what measures we have chosen in the current experiments.
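A minimal sketch of how such translational co-occurrence vectors could be assembled from aligned phrase-pair occurrences; the (language, translation) pairs play the role of the cells in Table 1, and the two cut-offs correspond to the cell and row frequencies described above (names are our own):

```python
from collections import defaultdict

def build_vectors(aligned_pairs, min_cell=2, min_row=5):
    """aligned_pairs: iterable of (dutch_term, language, translation) tuples,
    one per extracted phrase-pair occurrence.
    Returns {term: {(language, translation): count}} after applying the cut-offs."""
    vectors = defaultdict(lambda: defaultdict(int))
    for term, lang, translation in aligned_pairs:
        vectors[term][(lang, translation)] += 1

    filtered = {}
    for term, cells in vectors.items():
        kept = {k: v for k, v in cells.items() if v >= min_cell}   # cell-frequency cut-off
        if sum(kept.values()) >= min_row:                          # row-frequency cut-off
            filtered[term] = kept
    return filtered
```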
Related work
Multilingual parallel corpora have mostly been used for tasks related to word sense disambiguation such as separation of senses (Resnik and Yarowsky, 1997;Dyvik, 1998;Ide et al., 2002).
However, taking sense separation as a basis, Dyvik (2002) derives relations such as synonymy and hyponymy by applying the method of semantic mirrors. The paper illustrates how the method works. First, different senses are identified on the basis of manual word translations in sentence-aligned Norwegian-English data (2.6 million words in total). Second, senses are grouped in semantic fields. Third, features are assigned on the basis of inheritance. Lastly, semantic relations such as synonymy and hyponymy are detected based on intersection and inclusion among feature sets.
Improving the syntax-based approach for synonym identification using bilingual dictionaries has been discussed in Lin et al. (2003) and Wu and Zhou (2003). In the latter parallel corpora are also applied as a reference to assign translation likelihoods to candidates derived from the dictionary. Both of them are limited to single-word terms.
Some researchers employ multilingual corpora for the automatic acquisition of paraphrases (Shimota and Sumita, 2002;Bannard and Callison-Burch, 2005;Callison-Burch, 2008). The last two are based on automatic word alignment as is our approach.
Bannard and Callison-Burch (2005) use a method that is also rooted in phrase-based statistical machine translation. Translation probabilities provide a ranking of candidate paraphrases. These are refined by taking contextual information into account in the form of a language model. The Europarl corpus (Koehn, 2005) is used. It has about 30 million words per language. 46 English phrases are selected as a test set for manual evaluation by two judges. When using automatic alignment, the precision reached without using contextual refinement is 48.9%. A precision of 55.3% is reached when using context information. Manual alignment improves the performance by 26%. A precision score of 55% is attained when using multilingual data.
In a more recent publication Callison-Burch (2008) improved this method by using syntactic constraints and multiple languages in parallel. We have implemented a combination of Bannard and Callison-Burch (2005) and Callison-Burch (2008), in which we use PoS filters instead of syntactic constraints to compare our results with. More details can be found in the Section 5.
Apart from methods that use parallel corpora, monolingual pattern-based methods have been used to find term variations. Fahmi (2009) acquired term variations for the medical domain using a two-step model. As a first step an initial list of synonyms is extracted using a method adapted from DIPRE (Brin, 1999). During this step syntactic patterns guide the extraction of candidate terms in the same way as they will guide the extraction in this paper. This first step results in a list of candidate synonyms that are further filtered following a method described in Lin et al. (2003), which uses Web pages as an external source to measure the synonym compatibility hits of each pair. The precision and recall scores presented in Fahmi (2009) are high. We will give results for this method on our test set in Section 5 and refer to it as the pattern- and web-based approach.
Materials and methods
In the following subsections we describe the setup for our experiments.
Data collection
Measures of distributional similarity usually require large amounts of data. For the alignment method we need a parallel corpus of reasonable size with Dutch either as source or as target language coming from the domain we are interested in. Furthermore, we would like to experiment with various languages aligned to Dutch.
The freely available EMEA corpus (Tiedemann, 2009) includes 22 languages in parallel with a reasonable size of about 12-14 million tokens per language. The entire corpus is aligned at the sentence level for all possible combinations of languages. Thus, for acquiring Dutch synonyms we have 21 language pairs with Dutch as the source language. Each language pair includes about 1.1 million sentence pairs. Note that there is a lot of repetition in EMEA and the number of unique sentences (sentence fragments) is much smaller: around 350,000 sentence pairs per language pair with about 6-7 million tokens per language.
Word alignment and phrase extraction
For sentence alignment we applied hunalign (Varga et al., 2005) with the 'realign' function that induces lexical features from the bitext to be combined with length based features. Word alignment has been performed using GIZA++ (Och, 2003). We used standard settings defined in the Moses toolkit (Koehn et al., 2007) to generate Viterbi word alignments of IBM model 4 for sentences not longer than 80 tokens. In order to improve the statistical alignment we used lowercased tokens and lemmas in case we had them available (produced by the Tree-Tagger (Schmid, 1994) and the Alpino parser (van Noord, 2006)).
We used the grow heuristics to combine the asymmetric word alignments which starts with the intersection of the two Viterbi alignments and adds block-neighboring points to it in a second step. In this way we obtain high precision links with some many-to-many alignments. Finally we used the phrase extraction tool from Moses to extract phrase correspondences. Phrases in statistical machine translation are defined as sequences of consecutive words and phrase extraction refers to the exhaustive extraction of all possible phrase pairs that are consistent with the underlying word alignment. Consistency in this case means that words in a legal phrase are only aligned to words in the corresponding phrase and not to any other word outside of that phrase. The extraction mechanism can be restricted by setting a maximum phrase length which is seven in the default settings of Moses. However, we set the maximum phrase length to four, because we do not expect many terms in the medical domain to be longer than 4 words.
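As a simplified sketch of the consistency criterion just described (not the Moses extractor itself, and ignoring its handling of unaligned boundary words), phrase-pair extraction from a word alignment can be written as follows:

```python
def extract_phrase_pairs(src_len, trg_len, alignment, max_len=4):
    """Enumerate phrase pairs consistent with a word alignment (a set of (i, j) links):
    no word inside the source span may be linked to a word outside the target span,
    and vice versa. Spans are limited to `max_len` tokens on both sides."""
    pairs = []
    for i1 in range(src_len):
        for i2 in range(i1, min(i1 + max_len, src_len)):
            linked = [j for (i, j) in alignment if i1 <= i <= i2]
            if not linked:
                continue
            j1, j2 = min(linked), max(linked)
            if j2 - j1 >= max_len:
                continue
            # Consistency: no target word in [j1, j2] aligns outside [i1, i2].
            if all(i1 <= i <= i2 for (i, j) in alignment if j1 <= j <= j2):
                pairs.append(((i1, i2), (j1, j2)))
    return pairs

# Toy example: a 1-1 alignment of two 2-word phrases yields the word pairs and the full phrase pair.
print(extract_phrase_pairs(2, 2, {(0, 0), (1, 1)}))
```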
As explained above, word alignment is carried out on lowercased and possibly lemmatised versions of the corpus. However, for phrase extraction, we used surface wordforms and extracted them along with the part-of-speech (PoS) tags for Dutch taken from the corresponding Alpino parse trees. This allows us to lowercase all words except the words that have been tagged as name. Furthermore, the inclusion of PoS tags enabled us to filter the resulting phrase table according to typical patterns of multi-word terms. We also removed phrases that consist of only non-alphabetical characters. Note that we rely entirely on automatic processing of our data. Thus, the results from automatic tagging, lemmatisation and word alignment include errors. Bannard and Callison-Burch (2005) show that when using manual alignment the percentage of correct paraphrases significantly rises from 48.9% to 74.9%.
Selecting candidate terms
As we explained above we can select those phrases that are more likely to be good terms by using a regular expression over PoS tags. We apply a pattern using adjectives (A), nouns (NN), names (NM) and prepositions (P) as its components, based on Justeson and Katz (1995), which was adapted to Dutch by Fahmi (2009). To explain this regular expression in words, a candidate term is either a sequence of adjectives and/or nouns and/or names, ending in a noun or name, or it consists of two such strings, separated by a single preposition.
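A sketch of such a filter over PoS-tag sequences follows. The regular expression below is our own reconstruction of the in-words description above, not the paper's literal pattern, and the tag labels are illustrative:

```python
import re

# One symbol per token: A = adjective, N = noun, M = name, P = preposition.
TAG_MAP = {"adj": "A", "noun": "N", "name": "M", "prep": "P"}
# Reconstruction of the pattern described in the text (hypothetical, not the paper's regex):
# a sequence of A/N/M ending in N or M, optionally followed by one P and another such sequence.
TERM_PATTERN = re.compile(r"^[ANM]*[NM](P[ANM]*[NM])?$")

def is_candidate_term(pos_tags):
    """pos_tags: list of coarse PoS labels for the words of a phrase."""
    symbols = "".join(TAG_MAP.get(tag, "?") for tag in pos_tags)
    return bool(TERM_PATTERN.match(symbols))

print(is_candidate_term(["adj", "noun"]))            # True:  e.g. 'werkzame stof'
print(is_candidate_term(["noun", "prep", "noun"]))   # True:  noun-preposition-noun
print(is_candidate_term(["prep", "noun"]))           # False: cannot start with a preposition
```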
After applying the filters and removing all hapaxes we are left with 9.76 M co-occurrences of a Dutch (multi-word) term and a foreign translation.
Comparing vectors
To compare the vectors of the terms we need a similarity measure. We have chosen to describe the functions used in this paper using an extension of the notation used by Lin (1998), adapted by Curran (2003). Co-occurrence data is described as tuples: ⟨word, language, word′⟩, for example, ⟨kat, EN, cat⟩. Asterisks indicate a set of values ranging over all existing values of that component of the relation tuple. For example, (w, ∗, ∗) denotes, for a given word w, all translational contexts it has been found in, in any language. For the example of kat in Table 1, this would denote all values for all translational contexts the word is found in: Katze DE: 17, chat FR: 26, etc. Everything is defined in terms of co-occurrence data with non-zero frequencies. The set of attributes or features for a given corpus is defined as:

$$(w, *, *) \equiv \{(r, w') \mid \exists (w, r, w')\}$$

Each pair yields a frequency value, and the sequence of values is a vector indexed by r:w′ values, rather than natural numbers. A subscripted asterisk indicates that the variables are bound together:

$$(w_m, *_r, *_{w'}) \times (w_n, *_r, *_{w'})$$
The above refers to a dot product of the vectors for term w m and term w n summing over all the r:w pairs that these two terms have in common. For example we could compare the vectors for kat and some other term by applying the dot product to all bound variables.
We have limited our experiments to using Cosine 2 . We chose this measure, since it performed best in experiments reported in Van der Plas (2008). Cosine is a geometrical measure. It returns the cosine of the angle between the vectors of the words and is calculated as the dot product of the vectors:
$$\mathrm{Cosine} = \frac{(W_1, *_r, *_{w'}) \times (W_2, *_r, *_{w'})}{\sqrt{(W_1, *, *)^2 \times (W_2, *, *)^2}}$$
If the two words have the same distribution the angle between the vectors is zero.
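As a sketch (with our own data layout), the cosine over two such co-occurrence vectors, represented as dictionaries keyed by (language, translation) pairs, can be computed directly from the definition:

```python
import math

def cosine(vec1, vec2):
    """Cosine of the angle between two translational co-occurrence vectors."""
    shared = set(vec1) & set(vec2)
    dot = sum(vec1[k] * vec2[k] for k in shared)
    norm1 = math.sqrt(sum(v * v for v in vec1.values()))
    norm2 = math.sqrt(sum(v * v for v in vec2.values()))
    if norm1 == 0 or norm2 == 0:
        return 0.0
    return dot / (norm1 * norm2)

# Illustrative counts only: two Dutch terms sharing most of their translations score close to 1.
kat  = {("DE", "Katze"): 17, ("FR", "chat"): 26}
poes = {("DE", "Katze"): 9,  ("FR", "chat"): 8}
print(round(cosine(kat, poes), 3))
```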
Post-processing
A well-known problem of phrase-based methods for paraphrase or term variation acquisition is the fact that a large proportion of the term variations or paraphrases proposed by the system are super- or sub-strings of the original term (Callison-Burch, 2008). To remedy this problem we removed all term variations that are either super- or sub-strings of the original term from the lists of candidate term variations output by the system.
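A short sketch of this post-processing step, dropping candidates that merely contain or are contained in the query term:

```python
def filter_substrings(term, candidates):
    """Remove candidate variations that are super- or sub-strings of the original term."""
    return [c for c in candidates if term not in c and c not in term]

print(filter_substrings("congenitale aandoening",
                        ["aangeboren afwijking", "congenitale aandoening bij", "aandoening"]))
# ['aangeboren afwijking']
```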
Evaluation
There are several evaluation methods available to assess lexico-semantic data. Curran (2003) distinguishes two types of evaluation: direct evaluation and indirect evaluation. Direct evaluation methods compare the semantic relations given by the system against human performance or expertise. Indirect approaches evaluate the system by measuring its performance on a specific task.

2 Feature weights have been used in previous work for syntax-based methods to account for the fact that co-occurrences have different information values. Selectionally weak (Resnik, 1993) or light verbs such as hebben 'to have' have a lower information value than a verb such as uitpersen 'squeeze' that occurs less frequently. Although weights that promote features with a higher information value work very well for syntax-based methods, Van der Plas (2008) showed that weighting only helps to get better synonyms for very infrequent nouns when applied in alignment-based approaches. In the current setting we do not consider very infrequent terms, so we did not use any weighting.
Gold standard
We have chosen to evaluate the nearest neighbours of the alignment-based method on the term variations from the Elseviers medical encyclopedia, which is intended for a general audience and contains 379K words. The encyclopedia was made available to us by Spectrum B.V. 3 .
The test set is comprised of 848 medical terms from aambeeld 'incus' to zwezerik 'thymus' and their term variations. About 258 of these entries contain multiword terms. For most of the terms the list from Elseviers medical encyclopedia gives only one term variation, 146 terms have two term variations and only one term has three variations. For each of these medical terms in the test set the system generates a ranked list of term variations that will be evaluated against the term variations in the gold standard.
Results and Discussion
Before we present our results and give a detailed error analysis we would like to remind the reader of the two methods we compare our results with and give some more detail on the implementation of the second method.
Two methods for comparison
The first method is the pattern- and web-based approach described in Fahmi (2009). Note that we did not re-implement the method, so we were not able to run the method on the same corpus we are using in our experiments. The corpus used in Fahmi (2009) is a medical corpus developed at Tilburg University (http://ilk.uvt.nl/rolaquad). It consists of texts from a medical encyclopedia and a medical handbook and contains 57,004 sentences. The system outputs a ranked list of term variation pairs. We selected the top-100 pairs that are output by the system and evaluated these on the test set described in Subsection 4.1. The method is composed of two main steps. In the first step candidate terms are extracted from the corpus using a PoS filter that is similar to the PoS filter we applied. In the second step pairs of candidate term variations are re-ranked on the basis of information from the Web. Phrasal patterns such as 'X or Y' are used to get synonym compatibility hits, as opposed to 'X and Y', which points to non-synonymous terms.
The second method we compare with is the phrase-based translation method first introduced by Bannard and Callison-Burch (2005). Statistical word alignment can be used to measure the relation between source language items. Here, one makes use of the estimated translation likelihoods of phrases (p(f|e) and p(e|f)) that are used to build translation models in standard phrase-based statistical machine translation systems (Koehn et al., 2007). Bannard and Callison-Burch (2005) define the problem of paraphrasing as the following search problem:

\[ \hat{e}_2 = \operatorname*{argmax}_{e_2 : e_2 \neq e_1} p(e_2 \mid e_1), \quad \text{where} \quad p(e_2 \mid e_1) \approx \sum_{f} p(f \mid e_1)\, p(e_2 \mid f) \]

Certainly, for paraphrasing we are not only interested in \(\hat{e}_2\) but also in the other top-ranked paraphrase candidates, but this essentially does not change the algorithm. In their paper, Bannard and Callison-Burch (2005) also show that systematic errors (usually originating from bad word alignments) can be reduced by summing over several language pairs:

\[ \hat{e}_2 \approx \operatorname*{argmax}_{e_2 : e_2 \neq e_1} \sum_{C} \sum_{f_C} p(f_C \mid e_1)\, p(e_2 \mid f_C) \]
This is the approach that we also adapted for our comparison. The only difference in our implementation is that we applied a PoS-filter to extract candidate terms as explained in section 3.3. In some sense this is a sort of syntactic constraint introduced in Callison-Burch (2008). Furthermore, we set the maximum phrase length to 4 and applied the same post-processing as described in Subsection 3.5 to obtain comparable results. Table 2 shows the results for our method compared with the method adapted from Bannard and Callison-Burch (2005) and the method by Fahmi (2009). Precision and recall are given at several values of k. At k=1, only the top-1 term variations the system proposes are taken into account. At k=3 the top-3 candidate term variations are included in the calculations.
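For comparison, the following is a minimal sketch of the pivoting computation used in the Bannard and Callison-Burch style baseline, summing over foreign phrases from several language pairs; the phrase-table data structures and all numbers are our own assumptions, not their implementation:

```python
from collections import defaultdict

def paraphrase_scores(e1, p_f_given_e, p_e_given_f):
    """Score paraphrase candidates e2 for a source phrase e1 by pivoting
    over foreign phrases f in every available language pair C:

        score(e2) = sum_C sum_f p(f | e1) * p(e2 | f)

    p_f_given_e[lang][e] and p_e_given_f[lang][f] are assumed to be
    dictionaries of translation probabilities.
    """
    scores = defaultdict(float)
    for lang, table in p_f_given_e.items():
        for f, p_f in table.get(e1, {}).items():
            for e2, p_e2 in p_e_given_f.get(lang, {}).get(f, {}).items():
                if e2 != e1:
                    scores[e2] += p_f * p_e2
    return sorted(scores.items(), key=lambda x: -x[1])

# Toy phrase tables for one pivot language; all entries are invented.
p_f_given_e = {"fr": {"heart attack": {"crise cardiaque": 0.7, "infarctus": 0.3}}}
p_e_given_f = {"fr": {"crise cardiaque": {"heart attack": 0.6, "cardiac arrest": 0.2},
                      "infarctus": {"myocardial infarction": 0.8}}}
print(paraphrase_scores("heart attack", p_f_given_e, p_e_given_f))
```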
Results
The last column shows the coverage of the system. A coverage of 40% means that for 40% of the 850 terms in the test set one or more term variations are found. Recall is measured for the terms covered by the system.
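A minimal sketch of how precision and recall at k and coverage can be computed against the gold standard described above; the data structures are our own, and the toy entries only reuse terms mentioned in the text:

```python
def evaluate(system, gold, k):
    """Precision and recall at rank k, plus coverage.

    system: dict mapping each term to a ranked list of candidate variations
    gold:   dict mapping each term to the set of gold-standard variations
    Recall is measured over the terms covered by the system, as in the text.
    """
    covered = [t for t in gold if system.get(t)]
    correct = retrieved = relevant = 0
    for t in covered:
        top_k = system[t][:k]
        correct += len(set(top_k) & gold[t])
        retrieved += len(top_k)
        relevant += len(gold[t])
    precision = correct / retrieved if retrieved else 0.0
    recall = correct / relevant if relevant else 0.0
    coverage = len(covered) / len(gold) if gold else 0.0
    return precision, recall, coverage

# Invented toy data using two terms from the gold standard description.
gold = {"aambeeld": {"incus"}, "zwezerik": {"thymus", "thymusklier"}}
system = {"aambeeld": ["incus", "gehoorbeentje"], "zwezerik": []}
print(evaluate(system, gold, k=1))
```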
From Table 2 we can read that the method we propose is able to get about 30% of the term variations right when only the top-1 candidates are considered. It is able to retrieve roughly a quarter of the term variations provided in the gold standard. 4 If we increase k, precision goes down and recall goes up. This is expected, because the system proposes a ranked list of candidate term variations, so at higher values of k the quality is lower, but more terms from the gold standard are found. In comparison, the scores resulting from our adapted implementation of Bannard and Callison-Burch (2005) are lower. They do, however, manage to find more terms from the test set, covering around 48% of the words in the gold standard. This is due to the cut-off that we use when creating the co-occurrence vector to remove unreliable data points. In our approach we discarded hapaxes, whereas for the Bannard and Callison-Burch approach the entire phrase table is used. We therefore ran our system once again without this cut-off. As expected, the coverage went up in that setting, actually to 48% as well. 5 However, we can see that the precision and recall remained higher than the scores we got with the implementation following Bannard and Callison-Burch (2005). Hence, our vector-based approach seems to outperform the direct use of probabilities from phrase-based MT.

Footnote 5: The small difference in coverage is due to some mistakes in tokenisation for our method.
Finally, we also compare our results with the data set extracted using the pattern- and web-based approach from Fahmi (2009). The precision and recall figures of that data set are the highest in our comparison. However, since the coverage of this method is very low (which is not surprising since a smaller corpus is used to get these results), the precision and recall are calculated on the basis of a very small number of examples (35 to be precise). The results are therefore not very reliable. The precision and recall figures presented in Fahmi (2009), however, point in the same direction. To get an idea of the actual coverage of this method we would need to apply this extraction technique to the EMEA corpus. This is especially difficult due to the heavy use of web queries, which makes it problematic to apply this method to large data sets.
Error analysis
The most important finding from closely inspecting the output of the system is that many of the term variations proposed by the system are not found in the gold standard, but are in fact correct. The scores given in Table 2 are therefore pessimistic, and a manual evaluation with domain specialists would certainly give us more realistic and probably much higher scores. We also found some spelling variants, which are usually not covered by the gold standard. Look, for instance, at the following examples:
astma, asthma ('asthma')
atherosclerose, Artherosclerosis ('atherosclerosis')
autonoom zenuwstelsel, autonome zenuwstelsel ('autonomic nervous system')
Some mistakes could have been avoided using stemming or proper lemmatisation (plurals that are counted as wrong):
abortus, zwangerschapsafbrekingen ('abortion')
adenoom, adenomen ('adenoma')
indigestie, spijsverteringsstoornissen ('indigestion')
After removing the previous cases from the data, some of the remaining mistakes are related to the problem we mentioned in Section 3.5: phrase-based methods for paraphrase or term variation acquisition have the tendency to propose term variations that are super- or sub-strings of the original term. We were able to filter out these super- or sub-strings, but not in cases where a candidate term is a term variation of a super- or sub-string of the original term. Consider, for example, the term bloeddrukverlaging 'blood pressure decrease' and the candidate afname 'decrease', where afname is a synonym for verlaging.
Conclusions
In this article we have shown that translational context together with measures of distributional similarity can be used to extract medical term variations from aligned parallel corpora. Automatic word alignment and phrase extraction techniques from statistical machine translation can be applied to collect translational variations across various languages which are then used to identify semantically related words and phrases. In this study, we additionally apply pattern-based filters using part-of-speech labels to focus on particular patterns of single and multi-word terms. Our method outperforms another alignment-based approach measured on a gold standard taken from a medical encyclopedia when applied to the same data set and using the same PoS filter. Precision and recall are still quite poor according to the automatic evaluation. However, manual inspection suggests that many candidates are simply misjudged because of the low coverage of the gold standard data. We are currently setting up a manual evaluation. Altogether our approach provides a promising strategy for the extraction of term variations using straightforward and fully automatic techniques. We believe that our results could be useful for a range of applications and resources and that the approach in general is robust and flexible enough to be applied to various languages and domains.
Figure 1: Example of bidirectional word alignments of two parallel sentences
((A|NN|NM)+|(((A|NN|NM)*(NN|NM P)?)(A|NN|NM)*))NN+
Table 1: Translational co-occurrence vector for kat ('cat') based on four languages
Table 2: Percent precision and recall at several values of k and percent coverage for the method proposed in this paper (plus a version including hapaxes), the method adapted from Bannard and Callison-Burch (2005) and the output of the system proposed by Fahmi (2009).

Method                               k=1 P   k=1 R   k=2 P   k=2 R   k=3 P   k=3 R   Coverage
Phrase-based Distr. Sim              28.9    22.8    21.8    32.7    17.3    37.2    40.0
Phrase-based Distr. Sim (hapaxes)    25.4    20.9    20.4    32.1    16.1    36.8    47.8
Bannard & Callison-Burch (2005)      18.4    15.3    16.9    27.3    13.7    32.3    48.1
Fahmi (2009)                         38.2    35.1    37.1    35.1    37.1    35.1    4.0
Here, we give some examples below:
arts, dokter ('doctor')
ademnood, ademhalingsnood ('respiratory distress')
aangezichtsverlamming, gelaatsparalyse ('facial paralysis')
alvleesklierkanker, pancreaskanker ('cancer of the pancreas')
Spelling variants are a type of term variations that are not included in the definition of synonymy.
http://www.kiesbeter.nl/medischeinformatie/
Note that a recall of 100% is not possible, because some terms have several term variations.
Acknowledgements
The research leading to these results has received funding from the EU FP7 programme (FP7/2007-2013) under grant agreement nr 216594 (CLASSIC project: www.classic-project.org).
Bannard, C. and C. Callison-Burch. 2005. Paraphrasing with bilingual parallel corpora. In Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL).
Brin, S. 1999. Extracting patterns and relations from the World Wide Web. In WebDB '98: Selected papers from the International Workshop on The World Wide Web and Databases.
Brown, P.F., S.A. Della Pietra, V.J. Della Pietra, and R.L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263-296.
Callison-Burch, C. 2008. Syntactic constraints on paraphrases extracted from parallel corpora. In Proceedings of EMNLP.
Curran, J.R. 2003. From Distributional to Semantic Similarity. Ph.D. thesis, University of Edinburgh.
Dyvik, H. 1998. Translations as semantic mirrors. In Proceedings of Workshop Multilinguality in the Lexicon II (ECAI).
Dyvik, H. 2002. Translations as semantic mirrors: from parallel corpus to wordnet. Language and Computers, Advances in Corpus Linguistics. Papers from the 23rd International Conference on English Language Research on Computerized Corpora (ICAME 23), 16:311-326.
Fahmi, I. 2009. Automatic Term and Relation Extraction for Medical Question Answering System. Ph.D. thesis, University of Groningen.
Firth, J.R. 1957. A synopsis of linguistic theory 1930-1955. Studies in Linguistic Analysis (special volume of the Philological Society), pages 1-32.
Harris, Z.S. 1968. Mathematical structures of language. Wiley.
Ide, N., T. Erjavec, and D. Tufis. 2002. Sense discrimination with parallel corpora. In Proceedings of the ACL Workshop on Sense Disambiguation: Recent Successes and Future Directions.
Justeson, J. and S. Katz. 1995. Technical terminology: some linguistic properties and an algorithm for identification in text. Natural Language Engineering, 1:9-27.
Koehn, P., H. Hoang, A. Birch, C. Callison-Burch, M. Federico, N. Bertoldi, B. Cowan, W. Shen, C. Moran, R. Zens, C. Dyer, O. Bojar, A. Constantin, and E. Herbst. 2007. Moses: Open source toolkit for statistical machine translation. In Proceedings of the Annual Meeting of the Association for Computational Linguistics.
Koehn, P. 2005. Europarl: A parallel corpus for statistical machine translation. In Proceedings of the MT Summit, pages 79-86, Phuket, Thailand.
Lin, D., S. Zhao, L. Qin, and M. Zhou. 2003. Identifying synonyms among distributionally similar words. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI).
Lin, D. 1998. Automatic retrieval and clustering of similar words. In Proceedings of COLING/ACL.
McCray, A. and W. Hole. 1990. The scope and structure of the first version of the UMLS semantic network. In Symposium on Computer Applications in Primary Care (SCAMC-90), pages 126-130, Washington DC. IEEE Computer Society.
Och, F.J. 2003. GIZA++: Training of statistical translation models. Available from http://www.isi.edu/~och/GIZA++.html.
Resnik, P. and D. Yarowsky. 1997. A perspective on word sense disambiguation methods and their evaluation. In Proceedings of ACL SIGLEX Workshop on Tagging Text with Lexical Semantics: Why, what, and how?
Resnik, P. 1993. Selection and information. Unpublished doctoral thesis, University of Pennsylvania.
Schmid, Helmut. 1994. Probabilistic part-of-speech tagging using decision trees. In Proceedings of International Conference on New Methods in Language Processing, pages 44-49, Manchester, UK, September. http://www.ims.uni-stuttgart.de/~schmid/.
Schütze, H. 1992. Dimensions of meaning. In Proceedings of the ACM/IEEE conference on Supercomputing.
Shimota, M. and E. Sumita. 2002. Automatic paraphrasing based on parallel corpus for normalization. In Proceedings of the International Conference on Language Resources and Evaluation (LREC).
Tiedemann, Jörg. 2009. News from OPUS - A collection of multilingual parallel corpora with tools and interfaces. In Nicolov, N., K. Bontcheva, G. Angelova, and R. Mitkov, editors, Recent Advances in Natural Language Processing, volume V, pages 237-248, Borovets, Bulgaria. John Benjamins, Amsterdam/Philadelphia.
van der Plas, L. and J. Tiedemann. 2006. Finding synonyms using automatic word alignment and measures of distributional similarity. In Proceedings of COLING/ACL.
van der Plas, L. 2008. Automatic lexico-semantic acquisition for question answering. Groningen dissertations in linguistics.
van Noord, G. 2006. At last parsing is now operational. In Actes de la 13eme Conference sur le Traitement Automatique des Langues Naturelles.
Varga, D., L. Németh, P. Halácsy, A. Kornai, V. Trón, and V. Nagy. 2005. Parallel corpora for medium density languages. In Proceedings of RANLP 2005, pages 590-596.
Vossen, P. 1998. EuroWordNet: a multilingual database with lexical semantic networks.
Wilks, Y., D. Fass, Ch. M. Guo, J. E. McDonald, B. M. Slator, and T. Plate. 1993. Providing machine tractable dictionary tools. Machine Translation, 5(2):99-154.
Wu, H. and M. Zhou. 2003. Optimizing synonym extraction using monolingual and bilingual resources. In Proceedings of the International Workshop on Paraphrasing: Paraphrase Acquisition and Applications (IWP). |
11,970,767 | Parsing Without (Much) Phrase Structure | Approaches to NL syntax conform in varying degrees to the older relational/dependency model, (essentially that assumed in traditional grammar), which treats a sentence as a group of words united by various relations, and the newer constituent model. Some modern approaches have nonetheless involved shifts away from essentially constituent-based models of the sort associated with Bloomfield and Chomsky to more relation-based ones (e.g. case grammar, relational grammar, daughter-dependency and word grammar, corepresentational grammar) while some others, notably lexical-functional grammar, have nonetheless continued to rely crucially on certain techniques inherited from constituency-based grammar, particularly context-free grammar. In computational linguistics there is a strong (if not universal) reliance on phrase structure as the medium via which to represent syntactic structure; call this the CONSENSUS VIEW. A significant amount of effort has accordingly been invested in techniques by which to build such a representation efficiently, which has in turn led to considerable work on the formal and computational properties of context-free gramamrs (or natural extensions of them) and of the associated languages. In its strongest form, the consensus view says that the recovery of a fully specified parse tree is an essential step in computational language processing, and would, if correct, provide important support for the constituent model. In this paper, we shall critically examine the rationale for this view, and will sketch (informally) an alternative view which we find more defensible. The actual position we shall take for this discussion, however, is conservative in that we will not argue that there is no place whatever for constituent analysis in parsing or in syntactic analysis generally. What we WILL argue is that phrase structure is at least partly redundant in that a direct leap to the composition of some semantic units is possible from a relatively underspecified syntactic representation (as opposed to a complete parse tree). However, see Rindflesch forthcoming for an approach to.parsing which entails a much stronger denial of the consensus view.The rationale for the consensus view consists of four main propositions: (i) phrase structure analysis is well entrenched in both 'pure' and computational linguistics; (ii) phrase structure grammars are well understood mathematically; (iii) context-free languages are provably computationally tractable; (iv) semantic processing is either impossible, or at best highly nonoptimal, without a complete parse tree to work with (with the possible qualification that syntactic and semantic processing might be interleaved). We will focus on (ii-iv), (i) amounting to nothing more than the identification as such of the consensus view. Argument (ii) is compelling if, but only if, one is willing to grant certain other assumptions. Since these include the points at issue, namely that phrase structure analysis is in principle adequate to the task at hand, the argument is circular taken by itself. With regard to (iii), note that even if NL's (or large tracts of them) are context-free, that is SUFFICIENT tO assure that they are computationally tractable, but not NECESSARY. That is, the tractability of a language or sublanguage implies nothing with regard to context-freeness. 
1 Argument (iv) amounts to saying that the composition of a given semantic unit can be identified only after the corresponding syntactic constituent has been parsed, but this is false. It is 156 possible, both in principle and in fact, to recognize at least some semantic units by operating on an 'impoverished' syntactic representation, i.e. one which does not yet incorporate any information about the syntactic constituents corresponding to the units in question. P2] The goal in all four eases is to identify a nonlexical predicate consisting of likes and Mary and a predication consisting of John and the afore-mentioned nonlexical predicate. In 3-4, this predication must also be analyzed as a component of a larger one. Under the consensus view, this would require identification of constituents of the categories VP or VP/NP prior to recognition of nonlexical predicates, and the identification of constituents of the categories S or S/NP prior to the recognition of predications. But given just the amount of structure in the schemata shown in 1'-4', we can proceed directly to the semantic units as follows. Assuming that processing starts at the left: (a) in a sequence of the form NP 1 NP 2 P, leave NP 2 unlabelled; (b) in a sequence of the form NP P, label the NP as Subject of the P; (c) if no NP appears to the right of a P requiring an NP Object, associate this function with the nearest unlabelled NP to the left.We illustrate with 4. In either case, at the conclusion of the first pass, the predication corresponding to the subordinate clause is fully specified and at least the Subject of the predication corresponding to the main clause is identified. On the second pass, it suffices to search for P's requiring Object complements and to assign this function to any predication whose own P lies to the right of such a predicate. (Discontinuity poses no difficulties, nor is it necessary to make use of auxiliary devices such as empty categories to mark the positions of syntactic gaps.) Further, once a transitive P and its Object have been identified, these may be composed into a larger intransitive predicate.A second instructive example is provided by the problematical Dutch constructions discussed inBresnan et al. 1982. The problem, briefly put, is that there is a class of VP's in Dutch which take the form NP n-1 V n but which cannot, apparently, be assigned a center-embedding constituent structure. Using a lexical-functional framework, the authors show that constraints on f-structure can be used as a filter on c-structure which are generable by the (context-free) phrase structure component of the grammar. If one applies this conception seriously to parsing, then it follows that what the parser must construct is functionally annotated parse trees, and yet it is not difficult to see how the functional information could be used, much as it was in the earlier example, to bypass at least some of the steps involved in conslxucting a c-structure. As an example, consider ... dat Jan Pier Marie zag helpen zwemmen 'that John saw Piet help Marie to swim'. One way to look at the problem would be this: imagine that there is a recursive way of constructing complex verbs out of simple verbs such that the complex inherits the arguments of the simplexes, and that the arguments of the complex must appear in a linear order corresponding to the order of the simplexes with which they are associated.Imagine ful'ther that it is possible to | [] | Parsing Without (Much) Phrase Structure
Michael B Kac
Department of Linguistics
Program in Linguistics
University of Minnesota
Minneapolis, MN 55455, USA
Alexis Manaster-Ramer
University of Michigan
Ann Arbor, MI 48109, USA
Parsing Without (Much) Phrase Structure
The following sentences are offered by way of illustration: [1. John likes Mary 2. Mary, John likes 3. I think John likes Mary 4. Mary, 1 think John likes ] In these examples, where all the NP's are single words, it is a trivial matter to assign each to one of the following
Approaches to NL syntax conform in varying degrees to the older relational/dependency model, (essentially that assumed in traditional grammar), which treats a sentence as a group of words united by various relations, and the newer constituent model. Some modern approaches have nonetheless involved shifts away from essentially constituent-based models of the sort associated with Bloomfield and Chomsky to more relation-based ones (e.g. case grammar, relational grammar, daughter-dependency and word grammar, corepresentational grammar) while some others, notably lexical-functional grammar, have nonetheless continued to rely crucially on certain techniques inherited from constituency-based grammar, particularly context-free grammar. In computational linguistics there is a strong (if not universal) reliance on phrase structure as the medium via which to represent syntactic structure; call this the CONSENSUS VIEW. A significant amount of effort has accordingly been invested in techniques by which to build such a representation efficiently, which has in turn led to considerable work on the formal and computational properties of context-free gramamrs (or natural extensions of them) and of the associated languages. In its strongest form, the consensus view says that the recovery of a fully specified parse tree is an essential step in computational language processing, and would, if correct, provide important support for the constituent model. In this paper, we shall critically examine the rationale for this view, and will sketch (informally) an alternative view which we find more defensible. The actual position we shall take for this discussion, however, is conservative in that we will not argue that there is no place whatever for constituent analysis in parsing or in syntactic analysis generally. What we WILL argue is that phrase structure is at least partly redundant in that a direct leap to the composition of some semantic units is possible from a relatively underspecified syntactic representation (as opposed to a complete parse tree). However, see Rindflesch forthcoming for an approach to.parsing which entails a much stronger denial of the consensus view.The rationale for the consensus view consists of four main propositions: (i) phrase structure analysis is well entrenched in both 'pure' and computational linguistics; (ii) phrase structure grammars are well understood mathematically; (iii) context-free languages are provably computationally tractable; (iv) semantic processing is either impossible, or at best highly nonoptimal, without a complete parse tree to work with (with the possible qualification that syntactic and semantic processing might be interleaved). We will focus on (ii-iv), (i) amounting to nothing more than the identification as such of the consensus view. Argument (ii) is compelling if, but only if, one is willing to grant certain other assumptions. Since these include the points at issue, namely that phrase structure analysis is in principle adequate to the task at hand, the argument is circular taken by itself. With regard to (iii), note that even if NL's (or large tracts of them) are context-free, that is SUFFICIENT tO assure that they are computationally tractable, but not NECESSARY. That is, the tractability of a language or sublanguage implies nothing with regard to context-freeness. 1 Argument (iv) amounts to saying that the composition of a given semantic unit can be identified only after the corresponding syntactic constituent has been parsed, but this is false. 
It is 156 possible, both in principle and in fact, to recognize at least some semantic units by operating on an 'impoverished' syntactic representation, i.e. one which does not yet incorporate any information about the syntactic constituents corresponding to the units in question. P2] The goal in all four eases is to identify a nonlexical predicate consisting of likes and Mary and a predication consisting of John and the afore-mentioned nonlexical predicate. In 3-4, this predication must also be analyzed as a component of a larger one. Under the consensus view, this would require identification of constituents of the categories VP or VP/NP prior to recognition of nonlexical predicates, and the identification of constituents of the categories S or S/NP prior to the recognition of predications. But given just the amount of structure in the schemata shown in 1'-4', we can proceed directly to the semantic units as follows. Assuming that processing starts at the left: (a) in a sequence of the form NP 1 NP 2 P, leave NP 2 unlabelled; (b) in a sequence of the form NP P, label the NP as Subject of the P; (c) if no NP appears to the right of a P requiring an NP Object, associate this function with the nearest unlabelled NP to the left.We illustrate with 4. In either case, at the conclusion of the first pass, the predication corresponding to the subordinate clause is fully specified and at least the Subject of the predication corresponding to the main clause is identified. On the second pass, it suffices to search for P's requiring Object complements and to assign this function to any predication whose own P lies to the right of such a predicate. (Discontinuity poses no difficulties, nor is it necessary to make use of auxiliary devices such as empty categories to mark the positions of syntactic gaps.) Further, once a transitive P and its Object have been identified, these may be composed into a larger intransitive predicate.A second instructive example is provided by the problematical Dutch constructions discussed inBresnan et al. 1982. The problem, briefly put, is that there is a class of VP's in Dutch which take the form NP n-1 V n but which cannot, apparently, be assigned a center-embedding constituent structure. Using a lexical-functional framework, the authors show that constraints on f-structure can be used as a filter on c-structure which are generable by the (context-free) phrase structure component of the grammar. If one applies this conception seriously to parsing, then it follows that what the parser must construct is functionally annotated parse trees, and yet it is not difficult to see how the functional information could be used, much as it was in the earlier example, to bypass at least some of the steps involved in conslxucting a c-structure. As an example, consider ... dat Jan Pier Marie zag helpen zwemmen 'that John saw Piet help Marie to swim'. One way to look at the problem would be this: imagine that there is a recursive way of constructing complex verbs out of simple verbs such that the complex inherits the arguments of the simplexes, and that the arguments of the complex must appear in a linear order corresponding to the order of the simplexes with which they are associated.Imagine ful'ther that it is possible to
Approaches to NL syntax conform in varying degrees to the older relational/dependency model, (essentially that assumed in traditional grammar), which treats a sentence as a group of words united by various relations, and the newer constituent model. Some modern approaches have nonetheless involved shifts away from essentially constituent-based models of the sort associated with Bloomfield and Chomsky to more relation-based ones (e.g. case grammar, relational grammar, daughter-dependency and word grammar, corepresentational grammar) while some others, notably lexical-functional grammar, have nonetheless continued to rely crucially on certain techniques inherited from constituency-based grammar, particularly context-free grammar. In computational linguistics there is a strong (if not universal) reliance on phrase structure as the medium via which to represent syntactic structure; call this the CONSENSUS VIEW. A significant amount of effort has accordingly been invested in techniques by which to build such a representation efficiently, which has in turn led to considerable work on the formal and computational properties of context-free gramamrs (or natural extensions of them) and of the associated languages. In its strongest form, the consensus view says that the recovery of a fully specified parse tree is an essential step in computational language processing, and would, if correct, provide important support for the constituent model. In this paper, we shall critically examine the rationale for this view, and will sketch (informally) an alternative view which we find more defensible. The actual position we shall take for this discussion, however, is conservative in that we will not argue that there is no place whatever for constituent analysis in parsing or in syntactic analysis generally. What we WILL argue is that phrase structure is at least partly redundant in that a direct leap to the composition of some semantic units is possible from a relatively underspecified syntactic representation (as opposed to a complete parse tree). However, see Rindflesch forthcoming for an approach to.parsing which entails a much stronger denial of the consensus view.
The rationale for the consensus view consists of four main propositions: (i) phrase structure analysis is well entrenched in both 'pure' and computational linguistics; (ii) phrase structure grammars are well understood mathematically; (iii) context-free languages are provably computationally tractable; (iv) semantic processing is either impossible, or at best highly nonoptimal, without a complete parse tree to work with (with the possible qualification that syntactic and semantic processing might be interleaved). We will focus on (ii-iv), (i) amounting to nothing more than the identification as such of the consensus view. Argument (ii) is compelling if, but only if, one is willing to grant certain other assumptions. Since these include the points at issue, namely that phrase structure analysis is in principle adequate to the task at hand, the argument is circular taken by itself. With regard to (iii), note that even if NL's (or large tracts of them) are context-free, that is SUFFICIENT to assure that they are computationally tractable, but not NECESSARY. That is, the tractability of a language or sublanguage implies nothing with regard to context-freeness. 1 Argument (iv) amounts to saying that the composition of a given semantic unit can be identified only after the corresponding syntactic constituent has been parsed, but this is false. It is possible, both in principle and in fact, to recognize at least some semantic units by operating on an 'impoverished' syntactic representation, i.e. one which does not yet incorporate any information about the syntactic constituents corresponding to the units in question. The following sentences are offered by way of illustration:
1. John likes Mary
2. Mary, John likes
3. I think John likes Mary
4. Mary, I think John likes
In these examples, where all the NP's are single words, it is a trivial matter to assign each to one of the following schemata:
1'. NP1 P NP2
2'. NP1 NP2 P
3'. NP1 P1 NP2 P2 NP3
4'. NP1 NP2 P1 NP3 P2
The goal in all four cases is to identify a nonlexical predicate consisting of likes and Mary and a predication consisting of John and the aforementioned nonlexical predicate. In 3-4, this predication must also be analyzed as a component of a larger one. Under the consensus view, this would require identification of constituents of the categories VP or VP/NP prior to recognition of nonlexical predicates, and the identification of constituents of the categories S or S/NP prior to the recognition of predications. But given just the amount of structure in the schemata shown in 1'-4', we can proceed directly to the semantic units as follows. Assuming that processing starts at the left: (a) in a sequence of the form NP1 NP2 P, leave the initial NP (NP1) unlabelled; (b) in a sequence of the form NP P, label the NP as Subject of the P; (c) if no NP appears to the right of a P requiring an NP Object, associate this function with the nearest unlabelled NP to the left.
We illustrate with 4. In either case, at the conclusion of the first pass, the predication corresponding to the subordinate clause is fully specified and at least the Subject of the predication corresponding to the main clause is identified. On the second pass, it suffices to search for P's requiring Object complements and to assign this function to any predication whose own P lies to the right of such a predicate. (Discontinuity poses no difficulties, nor is it necessary to make use of auxiliary devices such as empty categories to mark the positions of syntactic gaps.) Further, once a transitive P and its Object have been identified, these may be composed into a larger intransitive predicate.
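The following is a minimal sketch of this first-pass labelling over a flat sequence of NP and P tokens; the token encoding and the transitivity lexicon are our own assumptions rather than the authors' implementation:

```python
def label_relations(tokens, transitive):
    """First pass of the labelling procedure sketched above.

    tokens:     list of (category, word) pairs, category "NP" or "P"
    transitive: set of P's that require an NP Object

    An NP immediately preceding a P is taken as its Subject; other NPs
    are left unlabelled.  A transitive P then takes the nearest
    unlabelled NP as Object, looking to its right first and otherwise
    to its left (our reading of rule (c) above).
    """
    relations = []
    subjects = set()
    for i, (cat, word) in enumerate(tokens):
        if cat == "NP" and i + 1 < len(tokens) and tokens[i + 1][0] == "P":
            relations.append(("SUBJ", word, tokens[i + 1][1]))
            subjects.add(i)
    for i, (cat, word) in enumerate(tokens):
        if cat == "P" and word in transitive:
            candidates = [j for j, (c, _) in enumerate(tokens)
                          if c == "NP" and j not in subjects]
            right = [j for j in candidates if j > i]
            left = [j for j in candidates if j < i]
            if right:
                relations.append(("OBJ", tokens[min(right)][1], word))
            elif left:
                relations.append(("OBJ", tokens[max(left)][1], word))
    return relations

# 4. "Mary, I think John likes"  (schema NP1 NP2 P1 NP3 P2)
tokens = [("NP", "Mary"), ("NP", "I"), ("P", "think"),
          ("NP", "John"), ("P", "likes")]
print(label_relations(tokens, transitive={"likes"}))
# -> [('SUBJ', 'I', 'think'), ('SUBJ', 'John', 'likes'), ('OBJ', 'Mary', 'likes')]
```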
A second instructive example is provided by the problematical Dutch constructions discussed in Bresnan et al. 1982. The problem, briefly put, is that there is a class of VP's in Dutch which take the form NP n-1 V n but which cannot, apparently, be assigned a center-embedding constituent structure. Using a lexical-functional framework, the authors show that constraints on f-structure can be used as a filter on c-structure which are generable by the (context-free) phrase structure component of the grammar. If one applies this conception seriously to parsing, then it follows that what the parser must construct is functionally annotated parse trees, and yet it is not difficult to see how the functional information could be used, much as it was in the earlier example, to bypass at least some of the steps involved in conslxucting a c-structure. As an example, consider ... dat Jan Pier Marie zag helpen zwemmen 'that John saw Piet help Marie to swim'. One way to look at the problem would be this: imagine that there is a recursive way of constructing complex verbs out of simple verbs such that the complex inherits the arguments of the simplexes, and that the arguments of the complex must appear in a linear order corresponding to the order of the simplexes with which they are associated.
Imagine further that it is possible to have rules like [5. VP -> V'' V; 6. V'' -> NP^n V' (UP n OBJ DOWN)]. Given a string of Object NP's, we would have each of them bear a different relation to the complex verb: the leftmost would be 1OBJ, the next leftmost 2OBJ, etc. There is now no difficulty coming up with a way to capture the generalization that 1OBJ is the OBJ of the first simplex verb, 2OBJ the OBJ of the second, and so on. In regard to parsing, we can now see that as long as there is a way to build up a complex V (we maintain a neutral stance as to how that might be done), then the composition of the semantic unit corresponding to the VP referred to in rule 5 (and the relations which obtain within it) can be recovered without actually building the VP constituent of the c-structure. As long as there is a way, somehow, to build up as much structure as is represented in the schema NP NP NP [V' V V] V, then the following will yield the desired results: (a) leave the initial NP unlabelled on the first pass; (b) for all n >= 2, label the n-th NP (n-1)OBJ of V''.
In the example under discussion, this will make Piet and Marie respectively 1OBJ and 2OBJ of the V' zag helpen. The entire predicate can then be identified by composing the rightmost V with the expression consisting of the V' and its arguments; by the same token, the pairing of arguments of the V' with the appropriate daughter V's is easily accomplished. The end result is the recognition of all the f-structures which have to be extracted from the string without prior recognition of either the V'' or VP constituents referred to in the rules (5-6).
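A minimal sketch of this argument pairing for the Dutch cross-serial pattern; the function and data structures are our own illustration of the labelling just described, not the authors' parser:

```python
def pair_cross_serial(nps, verbs):
    """Pair the arguments of a Dutch cross-serial verb cluster.

    Given NP_1 ... NP_n and V_1 ... V_n, the first NP is the Subject of
    the cluster as a whole, and for k >= 2 the k-th NP is the (k-1)OBJ,
    i.e. it is paired with the (k-1)-th simplex verb, as described above.
    """
    assert len(nps) == len(verbs)
    pairs = [("SUBJ", nps[0], " ".join(verbs))]
    for k in range(1, len(nps)):
        pairs.append(("%dOBJ" % k, nps[k], verbs[k - 1]))
    return pairs

# ... dat Jan Piet Marie zag helpen zwemmen
print(pair_cross_serial(["Jan", "Piet", "Marie"],
                        ["zag", "helpen", "zwemmen"]))
# -> [('SUBJ', 'Jan', 'zag helpen zwemmen'),
#     ('1OBJ', 'Piet', 'zag'), ('2OBJ', 'Marie', 'helpen')]
```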
Our examples are simplified in one respect, namely that they involve no NP's longer than a single word. It is possible that something more like phrase structure analysis is required to handle such units (as well as the V' referred to in the analysis of the Dutch example), though Rindflesch (forthcoming) argues that this is not the case. (See also Hudson 1976, 1984.) Up to this point, we have been concerned with showing that the case FOR the consensus view is not especially compelling; we now proceed to the arguments AGAINST it. The illustration just given actually amounts to an argument against, since it shows that the S or S/NP and VP or VP/NP constituents of a parse tree are inessential to cue the recognition of predications and nonlexical predicates.
The arguments up to this point have been concerned with the output of a syntactic parser; it needs to be noted as well that there are some difficulties associated with the idea that a parser operates with a set of phrase structure rules, or formally similar objects. In Kac et al. 1986, it is argued that there are advantages to parsing in a series of graded stages such that at each stage only a particular, restricted type of structural information (e.g. information about relations of subordination among verbs) is being sought. A variety of different types of information are 'compacted' into phrase structure grammars in a way which makes it difficult to isolate a given type and operate with it independently of the others. While there is nothing in principle to prevent this information from being extracted from a set of PS-rules, the overhead imposed by the interpretation process makes this an unattractive option. A preferable strategy would be to have a highly structured grammar for the parser to refer to, with a hierarchy of different types of information corresponding to the various phases via which the entire structural representation is built up.
We offer one last example which suggests strongly that phrase structure analysis is problematical in some cases. Consider the coordinate sentence John likes and Bill hates beans. One immediate observation that we can make is that the sequence Bill hates beans would, in isolation, be a sentence, which might in turn lead us to an analysis which, whatever else it might entail, would treat the material to the right of and as an S, coordinated by the and to some constituent occurring to the left of the conjunction. An obvious difficulty which stands in the way of this conclusion is that there does not appear prima facie to be any way to treat anything to the left of the and as an S, thereby violating the widely assumed principle that only like constituents can be coordinated (the principle of 'categorial harmony'). Four alternatives thus present themselves: abandon the analysis in favor of one in which the right-conjunct belongs to a category other than S; abandon the principle of categorial harmony; modify the principle of categorial harmony; find some way of analyzing the material to tile left of and as an S.
The first alternative looks initially most attractive, especially when seen in the light of the approach to categories originally proposed by Gazdar (1982) (Sag et al. 1985) and other expositions of GPSG. We could thus analyze the example as having the structure [S [S/NP [S/NP John likes] and [S/NP Bill hates]] beans]. Part of the justification for this analysis is the presence of an intonation break directly after hates that is not present when Bill hates beans occurs in isolation. This move, however, creates two new problems. First of all, it involves a possibly unwarranted intrusion of phonology into syntax. It is one thing to argue that a phrase structure analysis with purely syntactic motivation serves as an accurate predictor of where intonation breaks will fall, quite another to let the phrase structure analysis be dictated by phonological considerations (in which case the predictions are self-fulfilling). There is a more serious difficulty, however, namely that while there is indeed a break after hates, it is not the major break (which comes directly after likes) despite the fact that the analysis places the major syntactic boundary at this point. Full consistency with the phonological facts would require a syntactic analysis like [S [S/NP John likes] and [S [Bill hates] beans]]. We would then run into problems with the categories, however, since we would again have coordination of unlike constituents. Note, moreover, that it would not be possible to subsume S and S/NP by an 'archicategory', since the GPSG treatment of coordinability depends crucially on the categorical impossibility of coordinating X with X/Y (Gazdar 1981).
What we have said so far should be enough to make it clear that finding a way to analyze an example like the one under discussion in phrase structure terms is not as straightforward a matter as it might first have appeared to be. It is conceivable that ways can be found around the difficulties we have mentioned, though one might reasonably ask whether the effort would be of genuine interest or whether it would be more in the nature of a holding action. It is, in any case, possible to handle examples like the ones under discussion in a straightforward manner without attempting a phrase structure analysis (Kac 1985).
Summary:
1. The rationale for phrase structure analysis is uncompelling on both computational and linguistic grounds.
2. A fully specified parse tree is partially redundant insofar as structural cues for the recovery of semantic information are concerned.
3. Phrase structure rules and allied formalisms do not provide the optimal way of representing the grammatical information on which a parser depends.
4. Phrase structure analysis is problematical in certain cases.
These facts imply that alternatives to the consensus view deserve to be investigated.

Note 1. There is a deeper difficulty here, namely the presumption that NL's must be computationally tractable. There is, to our knowledge, no evidence that this is the case. While it is undeniable that humans parse rapidly and effortlessly most of the time, nothing follows from this fact regarding the computational properties of any NL taken as a whole. At most, it shows an understandable predisposition to communicate via easily parsed structures.
Bresnan, J., R.M. Kaplan, S. Peters, and A. Zaenen. 1982. Cross-serial dependencies in Dutch. Linguistic Inquiry 13.613-635.
Gazdar, G. 1981. Unbounded dependencies and coordinate structure. Linguistic Inquiry 12.155-184.
Gazdar, G. 1982. Phrase structure grammar. In P. Jacobson and G.K. Pullum, eds., The Nature of Syntactic Representation. Dordrecht: Reidel. 131-186.
Hudson, R.A. 1976. Arguments for a Nontransformational Grammar. Chicago and London: University of Chicago Press.
Hudson, R.A. 1984. Word Grammar. Oxford: Basil Blackwell.
Kac, M.B. 1985. Constraints on predicate coordination. Bloomington, IN: Indiana University Linguistics Club.
Kac, M.B., T. Rindflesch and K.L. Ryan. 1986. Reconnaissance-attack Parsing. This volume.
Rindflesch, T. Forthcoming. Doctoral dissertation in preparation, University of Minnesota.
Sag, I., G. Gazdar, T. Wasow and S. Weisler. 1985. Coordination and how to distinguish categories. Natural Language and Linguistic Theory 3.117-172. |
8,400,322 | Grammar Factorization by Tree Decomposition | We describe the application of the graph-theoretic property known as treewidth to the problem of finding efficient parsing algorithms. This method, similar to the junction tree algorithm used in graphical models for machine learning, allows automatic discovery of efficient algorithms such as the O(n 4 ) algorithm for bilexical grammars of Eisner and Satta. We examine the complexity of applying this method to parsing algorithms for general Linear Context-Free Rewriting Systems. We show that any polynomial-time algorithm for this problem would imply an improved approximation algorithm for the well-studied treewidth problem on general graphs. | [
15128029,
333410,
9004962,
6166308,
478500,
989542,
912349,
1878772,
5421301,
12273076,
538616,
7859387,
471453,
17143000
] | Grammar Factorization by Tree Decomposition
Daniel Gildea
University of Rochester
Grammar Factorization by Tree Decomposition
We describe the application of the graph-theoretic property known as treewidth to the problem of finding efficient parsing algorithms. This method, similar to the junction tree algorithm used in graphical models for machine learning, allows automatic discovery of efficient algorithms such as the O(n 4 ) algorithm for bilexical grammars of Eisner and Satta. We examine the complexity of applying this method to parsing algorithms for general Linear Context-Free Rewriting Systems. We show that any polynomial-time algorithm for this problem would imply an improved approximation algorithm for the well-studied treewidth problem on general graphs.
Introduction
In this article, we describe meta-algorithms for parsing: algorithms for finding the optimal parsing algorithm for a given grammar, with the constraint that rules in the grammar are considered independently of one another. In order to have a common representation for our algorithms to work with, we represent parsing algorithms as weighted deduction systems (Shieber, Schabes, and Pereira 1995;Goodman 1999;Nederhof 2003). Weighted deduction systems consist of axioms and rules for building items or partial results. Items are identified by square brackets, with their weights written to the left. Figure 1 shows a rule for deducing a new item when parsing a context free grammar (CFG) with the rule S → A B. The item below the line, called the consequent, can be derived if the two items above the line, called the antecedents, have been derived. Items have types, corresponding to grammar nonterminals in this example, and variables, whose values range over positions in the string to be parsed. We restrict ourselves to items containing position variables directly as arguments; no other functions or operations are allowed to apply to variables. The consequent's weight is the product of the weights of the two antecedents and the rule weight w 0 . Implicit in the notation is the fact that we take the maximum weight over all derivations of the same item. Thus, the weighted deduction system corresponds to the Viterbi or max-product algorithm for parsing. Applications of the same weighted deduction system with other semirings are also possible (Goodman 1999).
The computational complexity of parsing depends on the total number of instantiations of variables in the system's deduction rules. If the total number of instantiations is M, parsing is O(M) if there are no cyclic dependencies among instantiations, or, equivalently, if all instantiations can be sorted topologically. In most parsing algorithms, variables range over positions in the input string. In order to determine complexity in the length n of the input string, it is sufficient to count the number of unique position variables in each rule. If all rules have at most k position variables, M = O(n k ), and parsing takes time O(n k ) in the length of the input string. In the remainder of this article, we will explore methods for minimizing k, the largest number of position variables in any rule, among equivalent deduction systems. These methods directly minimize the parsing complexity of the resulting deduction system. Although we will assume no cyclic dependencies among rule instantiations for the majority of the article, we will discuss the cyclic case in Section 2.2.
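As a small illustration, the following sketch counts the distinct position variables in a deduction rule, which is the quantity k that determines the O(n^k) bound discussed above; the item encoding is our own:

```python
def parsing_degree(rule):
    """Number of distinct position variables in a deduction rule.

    rule is given as a list of items, each item being the list of position
    variables it mentions, with the consequent included.  If the largest
    rule of a deduction system uses k variables, chart parsing with that
    system takes O(n^k) time in the sentence length n.
    """
    return len({v for item in rule for v in item})

# The rule for S -> A B from Figure 1: items [A, x0, x1], [B, x1, x2], [S, x0, x2].
print(parsing_degree([["x0", "x1"], ["x1", "x2"], ["x0", "x2"]]))  # -> 3
```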
It is often possible to improve the computational complexity of a deduction rule by decomposing the computation into two or more new rules, each having a smaller number of variables than the original rule. We refer to this process as factorization. One straightforward example of rule factorization is the binarization of a CFG, as shown in Figure 2. Given a deduction rule for a CFG rule with r nonterminals on the righthand side, and a total of r + 1 variables, an equivalent set of rules can be produced, each with three variables, storing intermediate results that indicate that a substring of the original rule's righthand side has been recognized. This type of rule factorization produces an O(n 3 ) parser for any input CFG.
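A minimal sketch of this factorization, turning an r-ary CFG rule into a chain of binary rules with fresh intermediate symbols; the naming scheme for the intermediate nonterminals is our own choice:

```python
def binarize(lhs, rhs, weight):
    """Factor the rule lhs -> rhs (a list of r nonterminals) into binary
    rules, each corresponding to a deduction step with at most three
    position variables, as in the S -> A B C D example of Figure 2.
    """
    if len(rhs) <= 2:
        return [(lhs, tuple(rhs), weight)]
    rules = []
    prev = rhs[0]
    for i in range(1, len(rhs) - 1):
        new = f"{lhs}_{i}"          # fresh intermediate nonterminal
        rules.append((new, (prev, rhs[i]), 1.0))
        prev = new
    rules.append((lhs, (prev, rhs[-1]), weight))  # rule weight kept on the last step
    return rules

print(binarize("S", ["A", "B", "C", "D"], 0.5))
# -> [('S_1', ('A', 'B'), 1.0), ('S_2', ('S_1', 'C'), 1.0), ('S', ('S_2', 'D'), 0.5)]
```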
Another well-known instance of rule factorization is the hook trick of Eisner and Satta (1999), which reduces the complexity of parsing for bilexicalized CFGs from O(n^5) to O(n^4). The basic rule for bilexicalized parsing combines two CFG constituents marked with lexical heads as shown in Figure 3a. Here items with type C indicate constituents, with [C, x0, h, x1] indicating a constituent extending from position x0 to position x1, headed by the word at position h. The item [D, m → h] is used to indicate the weight assigned by the grammar to a bilexical dependency headed by the word at position h with the word at position m as a modifier.
w 1 : [A, x 0 , x 1 ] w 2 : [B, x 1 , x 2 ] w 3 : [C, x 2 , x 3 ] w 4 : [D, x 3 , x 4 ] w 0 w 1 w 2 w 3 w 4 : [S, x 0 , x 4 ] b) w 1 : [A, x 0 , x 1 ] w 2 : [B, x 1 , x 2 ] w 1 w 2 : [X, x 0 , x 2 ] w 5 : [X, x 0 , x 2 ] w 3 : [C, x 2 , x 3 ] w 3 w 5 : [Y, x 0 , x 3 ] w 6 : [Y, x 0 , x 3 ] w 3 : [D, x 3 , x 4 ] w 0 w 3 w 6 : [S, x 0 , x 3 ]
Figure 2
Binarization of the CFG rule S → A B C D as rule factorization: The deduction rule above can be factored into the three equivalent rule below.
a) w : [D, m → h] w 1 : [C, x 0 , h, x 1 ] w 2 : [C, x 1 , m, x 2 ] w w 1 w 2 : [C, x 0 , h, x 2 ] b) w : [D, m → h] w 2 : [C, x 1 , m, x 2 ] w w 2 : [H, h, x 1 , x 2 ] w h : [H, h, x 1 , x 2 ] w 1 : [C, x 0 , h, x 1 ] w h w 1 : [C, x 0 , h, x 2 ]
Figure 3
Rule factorization for bilexicalized parsing.
position h with the word at position m as a modifier. The deduction rule is broken into two steps, one which includes the weight for the bilexical grammar rule, and another which identifies the boundaries of the new constituent, as shown in Figure 3b. The hook trick has also been applied to Tree Adjoining Grammar (TAG; Eisner and Satta 2000), and has been generalized to improve the complexity of machine translation decoding under synchronous context-free grammars (SCFGs) with an n-gram language model (Huang, Zhang, and Gildea 2005).
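The saving obtained by the hook trick can be seen directly in the loop structure of the two strategies. The following sketch contrasts the single-step combination of Figure 3a (five position variables) with the factored version of Figure 3b (at most four position variables per step); the chart contents and weights are toy assumptions made for illustration.

# C[(x0, h, x1)]: weight of a constituent from x0 to x1 headed at h;
# D[(m, h)]: weight of the dependency with head h and modifier m.
from collections import defaultdict

C = {(0, 0, 2): 0.5, (2, 3, 4): 0.5}     # toy left and right constituents
D = {(3, 0): 0.25}                       # word 3 modifies head 0

def combine_naive(C, D):
    out = defaultdict(float)
    for (x0, h, x1), w1 in C.items():
        for (y1, m, x2), w2 in C.items():
            if y1 == x1 and (m, h) in D:             # variables x0, h, x1, m, x2
                out[(x0, h, x2)] = max(out[(x0, h, x2)], D[(m, h)] * w1 * w2)
    return out

def combine_hook(C, D):
    hooks, out = defaultdict(float), defaultdict(float)
    for (x1, m, x2), w2 in C.items():                # step 1: variables h, x1, m, x2
        for (mm, h), wd in D.items():
            if mm == m:
                hooks[(h, x1, x2)] = max(hooks[(h, x1, x2)], wd * w2)
    for (h, x1, x2), wh in hooks.items():            # step 2: variables x0, h, x1, x2
        for (x0, hh, y1), w1 in C.items():
            if hh == h and y1 == x1:
                out[(x0, h, x2)] = max(out[(x0, h, x2)], wh * w1)
    return out

print(combine_naive(C, D) == combine_hook(C, D))     # True on this toy chart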
Rule factorization has also been studied in the context of parsing for SCFGs. Unlike monolingual CFGs, SCFGs cannot always be binarized; depending on the permutation between nonterminals in the two languages, it may or may not be possible to reduce the rank, or number of nonterminals on the righthand side, of a rule. Algorithms for finding the optimal rank reduction of a specific rule are given by Zhang and Gildea (2007). The complexity of synchronous parsing for a rule of rank r is O(n 2r+2 ), so reducing rank improves parsing complexity.
Rule factorization has also been applied to Linear Context-Free Rewriting Systems (LCFRS), which generalize CFG, TAG, and SCFG to define a rewriting system where nonterminals may have arbitrary fan-out, which indicates the number of continuous spans that a nonterminal accounts for in the string (Vijay-Shankar, Weir, and Joshi 1987). Recent work has examined the problem of factorization of LCFRS rules in order to reduce rank without increasing grammar fan-out (Gómez-Rodríguez et al. 2009), as well as factorization with the goal of directly minimizing the parsing complexity of the new grammar (Gildea 2010).
We define factorization as a process which applies to rules of the input grammar independently. Individual rules are replaced with an equivalent set of new rules, which must derive the same set of consequent items as the original rule given the same antecedent items. While new intermediate items of distinct types may be produced, the set of items and weights derived by the original weighted deduction system is unchanged. This definition of factorization is broad enough to include all of the previous examples, but does not include, for example, the fold/unfold operation applied to grammars by Johnson (2007) and Eisner and Blatz (2007). Rule factorization corresponds to the unfold operation of fold/unfold.
If we allow unrestricted transformations of the input deduction system, finding the most efficient equivalent system is undecidable; this follows from the fact that it is undecidable whether a CFG generates the set of all strings (Bar-Hillel, Perles, and Shamir 1961), in which case its language could be recognized in constant time. Whereas the fold/unfold operation of Johnson (2007) and Eisner and Blatz (2007) specifies a narrower class of grammar transformations, no general algorithms are known for identifying an optimal series of transformations in this setting. Considering input rules independently allows us to provide algorithms for optimal factorization.
In this article, we wish to provide a general framework for factorization of deductive parsing systems in order to minimize computational complexity. We show how to apply the graph-theoretic property of treewidth to the factorization problem, and examine the question of whether efficient algorithms exist for optimizing the parsing complexity of general parsing systems in this framework. In particular, we show that the existence of a polynomial time algorithm for optimizing the parsing complexity of general LCFRS rules would imply an improved approximation algorithm for the well-studied problem of treewidth of general graphs.
Treewidth and Rule Factorization
In this section, we introduce the graph-theoretic property known as treewidth, and show how it can be applied to rule factorization.
A tree decomposition of a graph G = (V, E) is a type of tree having a subset of G's vertices at each node. We define the nodes of this tree T to be the set I, and its edges to be the set F. The subset of V associated with node i of T is denoted by X i . A tree decomposition is therefore defined as a pair ({X i | i ∈ I}, T = (I, F)) where each X i , i ∈ I is a subset of V, and tree T has the following properties:
- Vertex cover: The nodes of the tree T cover all the vertices of G: ∪_{i∈I} X i = V.
- Edge cover: Each edge in G is included in some node of T. That is, for all edges (u, v) ∈ E, there exists an i ∈ I with u, v ∈ X i .
- Running intersection: The nodes of T containing a given vertex of G form a connected subtree. Mathematically, for all i, j, k ∈ I, if j is on the (unique) path from i to k in T, then X i ∩ X k ⊆ X j .
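To make the three properties concrete, the following Python sketch checks them for a candidate decomposition; the encoding of the decomposition as bags plus tree edges, and the small example, are assumptions made only for illustration.

def is_tree_decomposition(vertices, edges, bags, tree_edges):
    # Vertex cover: every graph vertex appears in some bag.
    if set().union(*bags.values()) != set(vertices):
        return False
    # Edge cover: both endpoints of every graph edge share some bag.
    for u, v in edges:
        if not any({u, v} <= bag for bag in bags.values()):
            return False
    # Running intersection: nodes whose bags contain v induce a connected subtree.
    for v in vertices:
        nodes = {i for i, bag in bags.items() if v in bag}
        sub = [(i, j) for i, j in tree_edges if i in nodes and j in nodes]
        # a subforest of a tree is connected iff it has |nodes| - 1 edges
        if nodes and len(sub) != len(nodes) - 1:
            return False
    return True

# A four-cycle and a two-node decomposition of width 2.
vertices = ["a", "b", "c", "d"]
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
bags = {0: {"a", "b", "c"}, 1: {"a", "c", "d"}}
tree_edges = [(0, 1)]
print(is_tree_decomposition(vertices, edges, bags, tree_edges))   # True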
The treewidth of a tree decomposition ({X i }, T) is max i |X i | − 1. The treewidth of a graph is the minimum treewidth over all tree decompositions:
tw(G) = min_{({X_i}, T) ∈ TD(G)} max_i |X_i| − 1
where TD(G) is the set of valid tree decompositions of G. We refer to a tree decomposition achieving the minimum possible treewidth as being optimal. In general, more densely interconnected graphs have higher treewidth. Any tree has treewidth = 1; a graph consisting of one large cycle has treewidth = 2, and a fully connected graph of n vertices has treewidth = n − 1. Low treewidth indicates some treelike structure in the graph, as shown by the example with treewidth = 2 in Figure 4. As an example of the running intersection property, note that the vertex N appears in three adjacent nodes of the tree decomposition. Finding the treewidth of a graph is an NP-complete problem (Arnborg, Corneil, and Proskurowski 1987). However, given a graph of n vertices and treewidth k, a simple algorithm finds the optimal tree decomposition in time O(n k+2 ) (Arnborg, Corneil, and Proskurowski 1987), and a variety of approximation algorithms and heuristics are known for the treewidth problem (Bodlaender et al. 1995; Amir 2001; Feige, Hajiaghayi, and Lee 2005). Furthermore, for fixed k, optimal tree decompositions can be computed in linear time (Bodlaender 1996).
Figure 4
A tree decomposition of a graph is a set of overlapping clusters of the graph's vertices, arranged in a tree. This example has treewidth = 2.
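As an illustration of the kind of heuristic mentioned above, the following sketch computes a greedy minimum-degree elimination ordering; the largest bag it produces gives an upper bound on treewidth (the bags can always be arranged into a valid tree decomposition). It is only one standard heuristic, not the exact or approximation algorithms cited.

def min_degree_upper_bound(vertices, edges):
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    bags, remaining = [], set(vertices)
    while remaining:
        # eliminate a vertex of minimum degree in the remaining graph
        v = min(remaining, key=lambda x: len(adj[x] & remaining))
        neighbors = adj[v] & remaining
        bags.append({v} | neighbors)
        # eliminating v connects its remaining neighbors into a clique (fill-in)
        for a in neighbors:
            adj[a] |= neighbors - {a}
        remaining.remove(v)
    return bags, max(len(b) for b in bags) - 1

edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]       # a 4-cycle
bags, width = min_degree_upper_bound(["a", "b", "c", "d"], edges)
print(width)    # 2: matches the treewidth of a cycle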
We can factorize a deduction rule by representing the rule as a graph, which we call a dependency graph, and searching for tree decompositions of this graph. For a rule r having n variables
V = {v i | i ∈ {1, . . . , n}}, m antecedent items A i , i ∈ {1, . . . , m}, and consequent C, let V(A i ) ⊂ V be the variables appearing in antecedent A i , and V(C) be the variables appearing in the consequent. The dependency graph representation of the rule is G r = (V, E), where E = ∪_{S ∈ {A 1 ,...,A m ,C}} {(v i , v j ) | v i , v j ∈ V(S)}.
That is, we have a vertex for each variable in the rule, and connect any two vertices that appear together in the same antecedent, or that appear together in the consequent.
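This construction is straightforward to carry out mechanically. The sketch below builds G r for the four-child CFG rule of Figure 2a, writing each antecedent and the consequent as a list of its position variables; the variable names follow that figure, while the encoding is an assumption made for illustration.

from itertools import combinations

def dependency_graph(antecedents, consequent):
    groups = antecedents + [consequent]
    vertices = sorted({v for g in groups for v in g})
    edges = set()
    for g in groups:   # a clique over the variables of each antecedent and the consequent
        edges.update(frozenset(p) for p in combinations(sorted(set(g)), 2))
    return vertices, edges

antecedents = [["x0", "x1"], ["x1", "x2"], ["x2", "x3"], ["x3", "x4"]]
consequent = ["x0", "x4"]
vertices, edges = dependency_graph(antecedents, consequent)
print(len(vertices), len(edges))    # 5 vertices, 5 edges: a single large cycle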
The dependency graph representation allows us to prove the following result concerning parsing complexity:
Theorem 1
Given a deduction rule r for parsing where the input string is referenced only through position variables appearing as arguments of antecedent and consequent items, the optimal complexity of any factorization of rule r is O(n tw(G r )+1 ), where G r is the dependency graph derived from r.
Proof
One consequence of the definition of a tree decomposition is that, for any clique appearing in the original graph G r , there must exist a node in the tree decomposition T which contains all the vertices in the clique. We use this fact to show that there is a one-to-one correspondence between tree decompositions of a rule's dependency graph G r and factorizations of the rule.
First, we need to show that any tree decomposition of G r can be used as a factorization of the original deduction rule. By our earlier definition, a factorization must derive the same set of consequent items from a given set of antecedent items as the original rule. Because G r includes a clique connecting all variables in the consequent C, the tree decomposition T must have a node X c such that V(C) ⊆ X c . We consider this node to be the root of T. The original deduction rule can be factorized into a new set of rules, one for each node in T. For node X c , the factorized rule has C as a consequent, and all other nodes X i have a new partial result as a consequent, consisting of the variables X i ∩ X j , where X j is X i 's neighbor on the path to the root node X c . We must guarantee that the factorized rule set yields the same result as the original rule, namely, the semiring sum over all variable values of the semiring product of the antecedents' weights. The tree structure of T corresponds to a factorization of this semiring expression. For example, if we represent the CFG rule of Figure 2a with the generalized semiring expression:
⊕_{x_1} ⊕_{x_2} ⊕_{x_3} A(x_0, x_1) ⊗ B(x_1, x_2) ⊗ C(x_2, x_3) ⊗ D(x_3, x_4)
the factorization of this expression corresponding to the binarized rule is
⊕_{x_3} ( ( ⊕_{x_2} ( ( ⊕_{x_1} A(x_0, x_1) ⊗ B(x_1, x_2) ) ⊗ C(x_2, x_3) ) ) ⊗ D(x_3, x_4) )
where semiring operations ⊕ and ⊗ have been interchanged as allowed by the dependency graph for this rule. Because each antecedent A i is represented by a clique in the graph G r , the tree decomposition T must contain at least one node which includes all variables V(A i ). We can choose one such node and multiply in the weight of A i , given the values of variables V(A i ), at this step of the expression. The running intersection property of the tree decomposition guarantees that each variable has a consistent value at each point where it is referenced in the factorization.
The same properties guarantee that any valid rule factorization corresponds to a tree decomposition of the graph G r . We consider the tree decomposition with a set X i for each new rule r i , consisting of all variables used in r i , and with tree edges T defined by the producer/consumer relation over intermediate results in the rule factorization. Each antecedent of the original rule must appear in some new rule in the factorization, as must the consequent of the original rule. Therefore, all edges in the original rule's dependency graph G r appear in some tree node X i . Any variable that appears in two rules in the factorization must appear in all intermediate rules in order to ensure that the variable has a consistent value in all rules that reference it. This guarantees the running intersection property of the tree decomposition ({X i }, T). Thus any rule factorization, when viewed as a tree of sets of variables, has the properties that make it a valid tree decomposition of G r .
The theorem follows as a consequence of the one-to-one correspondence between rule factorizations and tree decompositions.
Computational Complexity
Factorization produces, for each input rule having m antecedents, at most m − 1 new rules, each containing at most the same number of nonterminals and the same number of variables as the input rule. Hence, the size of the new factorized grammar is O(|G| 2 ), and we avoid any possibility of an exponential increase in grammar size. Tighter bounds can be achieved for specific classes of input grammars.
The computational complexity of optimal factorization with tree decomposition is exponential in the size of the input rules. However, optimal factorization is generally feasible whenever parsing with the unfactorized grammar is feasible. This is because, for an input rule with ℓ variables, parsing is O(n^ℓ) in the sentence length n. The treewidth of this rule is at most ℓ − 1, and can be computed in time O(ℓ^{ℓ+1}); generally we expect n to be greater than ℓ. One may also wish to accept only rules having treewidth k and disregard the remainder, for example, when factorizing rules automatically extracted from word-aligned bitext (Wellington, Waxmonsky, and Melamed 2006; Huang et al. 2009) or from dependency treebanks (Kuhlmann and Nivre 2006; Gildea 2010). In this setting, the rules having treewidth k can be identified in time O(ℓ^{k+2}) using the simple algorithm of Arnborg, Corneil, and Proskurowski (1987) (where again ℓ is the number of variables in the input rules), or in time O(ℓ) using the algorithm of Bodlaender (1996).
Cyclic Dependencies
Although this article primarily addresses the case where there are no cyclic dependencies between rule instantiations, we note here that our techniques carry over to the cyclic case under certain conditions. If there are cycles in the rule dependencies, but the semiring meets Knuth's (1977) definition of a superior function, parsing takes time O(M log M), where M is the number of rule instantiations, and the extra log M term accounts for maintaining an agenda as a priority queue (Nederhof 2003). Cycles in the rule dependencies may arise, for example, from chains of unary productions in a CFG; the properties of superior functions guarantee that unbounded chains need not be considered. The max-product semiring used in Viterbi parsing has this property, assuming that all rule weights are less than one, whereas for exact computation with the sum-product semiring, unbounded chains must be considered. As in the acyclic case, M = O(n k ) for parsing problems where rules have at most k variables. Under the assumption of superior functions, parsing takes time O(n k k log n) with Knuth's algorithm. In this setting, as in the acyclic case, minimizing k with tree decomposition minimizes parsing complexity.
Related Applications of Treewidth
The technique of using treewidth to minimize complexity has been applied to constraint satisfaction (Dechter and Pearl 1989), graphical models in machine learning (Jensen, Lauritzen, and Olesen 1990;Shafer and Shenoy 1990), and query optimization for databases (Chekuri and Rajaraman 1997). Our formulation of parsing is most closely related to logic programming; in this area treewidth has been applied to limit complexity in settings where either the deduction rules or the input database of ground facts have fixed treewidth (Flum, Frick, and Grohe 2002). Whereas Flum, Frick, and Grohe (2002) apply treewidth to nonrecursive datalog programs, our parsing programs have unbounded recursion, as the depth of the parse tree is not fixed in advance. Our results for parsing can be seen as a consequence of the fact that, even in the case of unbounded recursion, the complexity of (unweighted) datalog programs is linear in the number of possible rule instantiations (McAllester 2002).
Examples of Treewidth for Parsing
In this section, we show how a few well-known parsing algorithms can be derived automatically by finding the optimal tree decomposition of a dependency graph.
To aid in visualization of the graphical representation of deduction rules, we use a factor graph representation based on that of Kschischang, Frey, and Loeliger (2001) for Markov Random Fields. Our graphs have three types of nodes: variables, antecedents, and consequents. Each antecedent node is connected to the variables it contains, and represents the antecedent's weight as a function of those variables. Antecedent nodes are analogous to the factor nodes of Kschischang, Frey, and Loeliger (2001), and consequent nodes are a new feature of this representation. We can think of consequents as factors with weight = 1; they do not affect the weights computed, but serve to guarantee that the consequent of the original rule can be found in one node of the tree decomposition. We refer to both antecedent and consequent nodes as factor nodes. Replacing each factor node with a clique over its neighbor variables yields the dependency graph G r defined earlier. We represent variables with circles, antecedents with squares labeled with the antecedent's weight, and consequents with diamonds labeled c. An example factor graph for the simple CFG rule of Figure 1 is shown in Figure 5.

CFG Binarization

Figure 6a shows the factor graph derived from the monolingual CFG rule with four children in Figure 2a. The dependency graph obtained by replacing each factor with a clique of size 2 (a single edge) is a graph with one large cycle, shown in Figure 6b. Finding the optimal tree decomposition yields a tree with nodes of size 3, {x 0 , x i , x i+1 } for each i, shown in Figure 6c. Each node in this tree decomposition corresponds to one of the factored deduction rules in Figure 2b. Thus, the tree decomposition shows us how to parse in time O(n 3 ); finding the tree decomposition of a long CFG rule is essentially equivalent to converting to Chomsky Normal Form.

Figure 6
Treewidth applied to CFG binarization.
The Hook Trick
The deduction rule for bilexicalized parsing shown in Figure 3a translates into the factor graph shown in Figure 7a. Factor nodes are created for the two existing constituents from the chart, with the first extending from position x 0 in the string to x 1 , and the second from x 1 to x 2 . Both factor nodes are connected not only to the start and end points, but also to the constituent's head word, h for the first constituent and m for the second (we show the construction of a left-headed constituent in the figure). An additional factor is connected only to h and m to represent the bilexicalized rule weight, expressed as a function of h and m, which is multiplied with the weight of the two existing constituents to derive the weight of the new constituent. The new constituent is represented by a consequent node at the top of the graph-the variables that will be relevant for its further combination with other constituents are its end points x 0 and x 2 and its head word h.
Placing an edge between each pair of variable nodes that share a factor, we get Figure 7b. If we compute the optimal tree decomposition for this graph, shown in Figure 7c, each of the two nodes corresponds to one of the factored rules in Figure 3b. The largest node of the tree decomposition has four variables, giving the O(n 4 ) algorithm of Eisner and Satta (1999).
Figure 7
Treewidth applied to bilexicalized parsing.

SCFG Parsing Strategies

SCFGs generalize CFGs to generate two strings with isomorphic hierarchical structure simultaneously, and have become widely used as statistical models of machine translation (Galley et al. 2004; Chiang 2007). We write SCFG rules as productions with one lefthand side nonterminal and two righthand side strings. Nonterminals in the two strings are linked with superscript indices; symbols with the same index must be further rewritten synchronously. For example,
X → A (1) B (2) C (3) D (4) , A (1) B (2) C (3) D (4)        (1)
is a rule with four children and no reordering, whereas
X → A (1) B (2) C (3) D (4) , B (2) D (4) A (1) C (3)        (2)
expresses a more complex reordering. In general, we can take indices in the first righthand-side string to be consecutive, and associate a permutation π with the second string. If we use X i for 0 ≤ i ≤ n as a set of variables over nonterminal symbols (for example, X 1 and X 2 may both stand for nonterminal A), we can write rules in the general form:
X_0 → X_1^{(1)} · · · X_n^{(n)} , X_{π(1)}^{(π(1))} · · · X_{π(n)}^{(π(n))}
Unlike monolingual CFGs, SCFGs cannot always be binarized. In fact, the languages of string pairs generated by a synchronous grammar can be arranged in an infinite hierarchy, with each rank ≥ 4 producing languages not possible with grammars restricted to smaller rules (Aho and Ullman 1972). For any grammar with maximum rank r, converting each rule into a single deduction rule yields an O(n 2r+2 ) parsing algorithm, because there are r + 1 boundary variables in each language. More efficient parsing algorithms are often possible for specific permutations, and, by Theorem 1, the best algorithm for a permutation can be found by computing the minimum-treewidth tree decomposition of the graph derived from the SCFG deduction rule for a specific permutation. For example, for the non-binarizable rule of Equation (2), the resulting factor graph is shown in Figure 8a, where variables x 0 , . . . , x 4 indicate position variables in one language of the synchronous grammar, and y 0 , . . . , y 4 are positions in the other language. The optimal tree decomposition for this rule is shown in Figure 8c. For this permutation, the optimal parsing algorithm takes time O(n 8 ), because the largest node in the tree decomposition of Figure 8c includes eight position variables. This result is intermediate between the O(n 6 ) for binarizable SCFGs, also known as Inversion Transduction Grammars (Wu 1997), and the O(n 10 ) that we would achieve by recognizing the rule in a single deduction step.
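The dependency graph for an SCFG rule can be generated directly from its permutation, as in the following sketch; the variable-naming scheme is an assumption made for illustration, and the resulting graph can be handed to any treewidth routine (such as the heuristic sketched earlier) to obtain a parsing strategy.

from itertools import combinations

def scfg_dependency_graph(perm):
    """perm[j] = source index (1-based) of the nonterminal in target position j."""
    n = len(perm)
    target_pos = {perm[j]: j for j in range(n)}
    groups = []
    for i in range(1, n + 1):               # antecedent for each nonterminal:
        j = target_pos[i]                   # its source span and its target span
        groups.append({f"x{i-1}", f"x{i}", f"y{j}", f"y{j+1}"})
    groups.append({"x0", f"x{n}", "y0", f"y{n}"})          # consequent
    edges = set()
    for g in groups:
        edges.update(frozenset(p) for p in combinations(sorted(g), 2))
    return sorted(set().union(*groups)), edges

# the permutation of Equation (2): target-side order B D A C
vertices, edges = scfg_dependency_graph([2, 4, 1, 3])
print(len(vertices))    # 10 position variables: x0..x4 and y0..y4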
Gildea and Štefankovič (2007) use a combinatorial argument to show that as the number of nonterminals r in an SCFG rule grows, the parsing complexity grows as Ω(n cr ) for some constant c. In other words, some very difficult permutations exist of all lengths.
It is interesting to note that although applying the tree decomposition technique to long CFG rules results in a deduction system equivalent to a binarized CFG, the individual deduction steps in the best parsing strategy for an SCFG rule do not in general correspond to SCFG rules. This is because the intermediate results may include more than one span in each language. These intermediate deduction steps do, however, correspond to LCFRS rules. We now turn to examine LCFRS in more detail.
LCFRS Parsing Strategies
LCFRS provides a generalization of a number of widely used formalisms in natural language processing, including CFG, TAG, SCFG, and synchronous TAG. LCFRS has also been used to model non-projective dependency grammars, and the LCFRS rules extracted from dependency treebanks can be quite complex (Kuhlmann and Satta 2009), making factorization important. Similarly, LCFRS can model translation relations beyond the power of SCFG (Melamed, Satta, and Wellington 2004), and grammars extracted from word-aligned bilingual corpora can also be quite complex (Wellington, Waxmonsky, and Melamed 2006). An algorithm for factorization of LCFRS rules is presented by Gildea (2010), exploiting specific properties of LCFRS. The tree decomposition method achieves the same results without requiring analysis specific to LCFRS. In this section, we examine the complexity of rule factorization for general LCFRS grammars.
The problem of finding the optimal factorization of an arbitrary deduction rule is NP-complete. This follows from the NP-completeness of treewidth using the following construction: Given a graph, create a deduction rule with a variable for each vertex in the graph and an antecedent for each edge, containing the two variables associated with the edge's endpoints. The graphs produced by LCFRS grammar rules, however, have certain properties which may make more efficient factorization algorithms possible. We first define LCFRS precisely before examining the properties of these graphs.
An LCFRS is defined as a tuple G = (V T , V N , P, S), where V T is a set of terminal symbols, V N is a set of nonterminal symbols, P is a set of productions, and S ∈ V N is a distinguished start symbol. Associated with each nonterminal B is a fan-out ϕ(B), which tells how many continuous spans B covers. Productions p ∈ P take the form:
p : A → g(B 1 , B 2 , . . . , B r )        (3)
where A, B 1 , . . . , B r ∈ V N , and g is a function
g : (V_T^*)^{ϕ(B_1)} × · · · × (V_T^*)^{ϕ(B_r)} → (V_T^*)^{ϕ(A)}
which specifies how to assemble the ∑_{i=1}^{r} ϕ(B_i) spans of the righthand side nonterminals into the ϕ(A) spans of the lefthand side nonterminal. The function g must be linear and non-erasing, which means that if we write g(⟨s 1,1 , . . . , s 1,ϕ(B 1 ) ⟩, . . . , ⟨s r,1 , . . . , s r,ϕ(B r ) ⟩) = ⟨t 1 , . . . , t ϕ(A) ⟩, the tuple of strings t 1 , . . . , t ϕ(A) on the righthand side contains each variable s i,j from the lefthand side exactly once, and may also contain terminals from V T . The process of generating a string from an LCFRS grammar can be thought of as first choosing, top-down, a production to expand each nonterminal, and then, bottom-up, applying the functions associated with each production to build the string. As an example, the CFG
S → A B
A → a
B → b
corresponds to the following grammar in LCFRS notation:
S → g S (A, B)    g S (⟨s A ⟩, ⟨s B ⟩) = ⟨s A s B ⟩
A → g A ()        g A () = ⟨a⟩
B → g B ()        g B () = ⟨b⟩
Here, all nonterminals have fan-out = 1, reflected in the fact that all tuples defining the productions' functions contain just one string. Whereas CFG is equivalent to LCFRS with fan-out = 1, SCFG and TAG can be represented as LCFRS with fan-out = 2. Higher values of fan-out allow strictly more powerful grammars (Rambow and Satta 1999). Polynomial-time parsing is possible for any fixed LCFRS grammar, but the degree of the polynomial depends on the grammar. Parsing general LCFRS grammars, where the grammar is considered part of the input, is NP-complete (Satta 1992).
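The linear and non-erasing condition on g is easy to verify mechanically. The following sketch encodes a production's lefthand-side spans as sequences of (child index, span index) pairs, an encoding assumed here purely for illustration, and checks that every righthand-side span variable is used exactly once.

from collections import Counter

def is_linear_non_erasing(rhs_fan_outs, lhs_spans):
    expected = {(i, j) for i, f in enumerate(rhs_fan_outs) for j in range(f)}
    used = Counter(x for span in lhs_spans for x in span if isinstance(x, tuple))
    return set(used) == expected and all(c == 1 for c in used.values())

# S -> g_S(<s_A>, <s_B>) = <s_A s_B>: the CFG-style example above.
print(is_linear_non_erasing([1, 1], [[(0, 0), (1, 0)]]))    # True
# A production that dropped B's span would be rejected:
print(is_linear_non_erasing([1, 1], [[(0, 0)]]))            # False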
Graphs Derived from LCFRS Rules
Given an LCFRS rule as defined previously, a weighted deduction rule for a bottomup parser can be derived by creating an antecedent for each righthand nonterminal, a consequent for the lefthand side, and variables for all the boundaries of the nonterminals in the rule. A nonterminal of fan-out f has 2f boundaries. Each boundary variable will occur exactly twice in the deduction rule: either in two antecedents, if two nonterminals on the rule's righthand side are adjacent, or once in an antecedent and once in the consequent, if the variable indicates a boundary of any segment of the rule's lefthand side. Converting such deduction rules into dependency graphs, we see that the cliques of the dependency graph may be arbitrarily large, due to the unbounded fan-out of LCFRS nonterminals. However, each vertex appears in only two cliques, because each boundary variable in the rule is shared by exactly two nonterminals. In the remainder of this section, we consider whether the problem of finding the optimal tree decomposition of this restricted set of graphs is also NP-complete, or whether efficient algorithms may be possible in the LCFRS setting.
Approximation of Treewidth for General Graphs
We will show that an efficient algorithm for finding the factorization of an arbitrary LCFRS production that optimizes parsing complexity would imply the existence of an algorithm for treewidth that returns a result within a factor of 4∆(G) of the optimum, where ∆(G) is the maximum degree of the input graph. Although such an approximation algorithm may be possible, it would require progress in fundamental problems in graph theory.
Consider an arbitrary graph G = (V, E), and define k to be its treewidth, k = tw(G). We wish to construct a new graph G′ = (V′, E′) from G in such a way that tw(G′) = tw(G) and every vertex in G′ has even degree. This can be accomplished by doubling the graph's edges in the manner shown in Figure 9. To double the edges, for every edge e = (u, v) in E, we add a new vertex ê to G′ and add edges (u, ê) and (v, ê) to G′. We also include every edge in the original graph G in G′. Now, every vertex v in G′ has degree = 2, if it is a newly created vertex, or twice the degree of v in G otherwise, and therefore
∆(G′) = 2∆(G)        (4)
We now show that tw(G′) = tw(G), under the assumption that tw(G) ≥ 3. Any tree decomposition of G can be adapted to a tree decomposition of G′ by adding a node containing {u, v, ê} for each edge e in the original graph, as shown in Figure 10. The new node can be attached to a node containing u and v; because u and v are connected by an edge in G, such a node must exist in G's tree decomposition. The vertex ê will not occur anywhere else in the tree decomposition, and the occurrences of u and v still form a connected subtree. For each edge e = (u, v) in G′, the tree decomposition must have a node containing u and v; this is the case because, if e is an original edge from G, there is already a node in the tree decomposition containing u and v, whereas if e is an edge to a newly added vertex in G′, one of the newly added nodes in the tree decomposition
Figure 9
An example graph G ex and the result G′ ex of doubling G ex 's edges.
Figure 10
Tree decompositions of G ex and G′ ex .
will contain its endpoints. We constructed the new tree decomposition by adding nodes of size 3. Therefore, as long as the treewidth of G was at least 3, tw(G′) ≤ tw(G). In the other direction, because G is a subgraph of G′, any tree decomposition of G′ forms a valid tree decomposition of G after removing the vertices in G′ − G, and hence tw(G′) ≥ tw(G). Therefore,
tw(G′) = tw(G)        (5)
Because every vertex in G′ has even degree, G′ has an Eulerian tour, that is, a path visiting every edge exactly once, beginning and ending at the same vertex. Let π = π 1 , . . . , π n be the sequence of vertices along such a tour, with π 1 = π n . Note that the sequence π contains repeated elements. Let µ i , i ∈ {1, . . . , n} indicate how many times we have visited π i on the ith step of the tour: µ i = |{j | π j = π i , j ∈ {1, . . . , i}}|. We now construct an LCFRS production P with |V′| righthand side nonterminals from the Eulerian tour:

P : X → g(B 1 , . . . , B |V′| )
g( s 1,1 , . . . , s 1,φ(B 1 ) , . . . , s |V′|,1 , . . . , s |V′|,φ(B |V′| ) ) = s π 1 ,µ 1 · · · s π n ,µ n

The fan-out φ(B i ) of each nonterminal B i is the number of times vertex i is visited on the Eulerian tour. The fan-out of the lefthand side nonterminal X is one, and the lefthand side is constructed by concatenating the spans of each nonterminal in the order specified by the Eulerian tour.
For the example graph in Figure 9, one valid tour is

π ex = A, B, C, D, F, C, E, A, G, B, H, C, A

This tour results in the following LCFRS production:

P ex : X → g ex (A, B, C, D, E, F, G, H)
g ex (⟨s A,1 , s A,2 , s A,3 ⟩, ⟨s B,1 , s B,2 ⟩, ⟨s C,1 , s C,2 , s C,3 ⟩, ⟨s D,1 ⟩, ⟨s E,1 ⟩, ⟨s F,1 ⟩, ⟨s G,1 ⟩, ⟨s H,1 ⟩) = ⟨s A,1 s B,1 s C,1 s D,1 s F,1 s C,2 s E,1 s A,2 s G,1 s B,2 s H,1 s C,3 s A,3 ⟩

We now construct dependency graph G′′ from the LCFRS production P by applying the technique of Section 2. G′′ has n + 1 vertices, corresponding to the beginning and end points of the nonterminals in P. The edges in G′′ are formed by adding a clique for each nonterminal in P connecting all its beginning and end points, that is, (2f choose 2) edges for a nonterminal of fan-out f . We must include a clique for X, the lefthand side of the production. However, because the righthand side of the production begins and ends with the same nonterminal, the vertices for the beginning and end points of X are already connected, so the lefthand side does not affect the graph structure for the entire production. By Theorem 1, the optimal parsing complexity of P is tw(G′′) + 1.
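The two steps of the construction, doubling edges and reading off an Eulerian tour, can be sketched as follows; the graph here is a toy triangle rather than G ex, and the naming of the added vertices is an assumption made for the example.

from collections import Counter, defaultdict

def double_edges(vertices, edges):
    vs, es = list(vertices), []
    for k, (u, v) in enumerate(edges):
        mid = f"e{k}"                       # the new vertex added for edge (u, v)
        vs.append(mid)
        es.extend([(u, v), (u, mid), (v, mid)])
    return vs, es

def eulerian_circuit(edges, start):
    """Hierholzer's algorithm; assumes a connected graph with all degrees even."""
    adj = defaultdict(list)
    for i, (u, v) in enumerate(edges):
        adj[u].append((v, i))
        adj[v].append((u, i))
    used, stack, circuit = set(), [start], []
    while stack:
        v = stack[-1]
        while adj[v] and adj[v][-1][1] in used:
            adj[v].pop()
        if adj[v]:
            w, i = adj[v].pop()
            used.add(i)
            stack.append(w)
        else:
            circuit.append(stack.pop())
    return circuit[::-1]

_, es = double_edges(["A", "B", "C"], [("A", "B"), ("B", "C"), ("C", "A")])
tour = eulerian_circuit(es, "A")
# the tour is closed, and each vertex's visit count gives its fan-out
print(tour[0] == tour[-1], Counter(tour[:-1]))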
The graphs G′ and G′′ are related in the following manner: Every edge in G′ corresponds to a vertex in G′′, and every vertex in G′ corresponds to a clique in G′′. We can identify vertices in G′′ with unordered pairs of vertices {u, v} in G′. The edges in G′′ are ({u, v}, {u, w}) ∀u, v, w : u ≠ v, u ≠ w, v ≠ w. An example of G′′ derived from our example production P ex is shown in Figure 11.
Any tree decomposition T′′ of G′′ can be transformed into a valid tree decomposition T′ of G′ by simply replacing each vertex in each node of T′′ with both corresponding vertices in G′. If T′′ witnesses a tree decomposition of optimal width k′′ = tw(G′′), each node in T′′ will produce a node of size at most 2k′′ in T′. For any vertex v in G′, one node in T′′ must contain the clique corresponding to v in G′′. Each vertex {v, w} in G′′ must be found in a contiguous subtree of T′′, and these subtrees all include the node containing the clique for v. The occurrences of v in T′ are the union of these contiguous subtrees, which must itself form a contiguous subtree. Furthermore, each edge (u, v) in G′ corresponds to some vertex in G′′, so u and v must occur together in some node of T′. Combining these two properties, we see that T′ is a valid tree decomposition of G′. From the construction, if SOL is the treewidth of T′, we are guaranteed that
SOL ≤ 2tw(G′′)        (6)
In the other direction, any tree decomposition T′ of G′ can be transformed into a tree decomposition T′′ of G′′ by simply replacing each occurrence of vertex v in a node of T′ with all vertices {v, w} in G′′. The number of such vertices is the degree of v, ∆(v).
Figure 11
Dependency graph G′′ ex derived from the example of Figure 9. Vertex #A corresponds to the beginning of the Eulerian tour through G′ ex and A# corresponds to the end of the tour; all other vertices correspond to edges in G′ ex .
Each vertex {v, w} occurs in a contiguous subtree of T′′ because v and w occurred in contiguous subtrees of T′, and had to co-occur in at least one node of T′. Each edge in G′′ comes from a clique for some vertex v in G′, so the edge has both its endpoints in any node of T′′ corresponding to a node of T′ that contained v. Thus T′′ is a valid tree decomposition of G′′. We expand each node in the tree decomposition by at most the maximum degree of the graph ∆(G′), and therefore

tw(G′′) ≤ ∆(G′)tw(G′)        (7)
Assume that we have an efficient algorithm for computing the optimal parsing strategy of an arbitrary LCFRS rule. Consider the following algorithm for finding a tree decomposition of an input graph G:
- Transform G to G′ of even degree, and construct LCFRS production P from an Eulerian tour of G′.
- Find the optimal parsing strategy for P.
- Translate this strategy into a tree decomposition of G′′ of treewidth k′′, and map this into a tree decomposition of G′, and then remove all new nodes ê to obtain a tree decomposition of G of treewidth SOL.
If tw(G′′) = k′′, we have SOL ≤ 2k′′ from Equation (6), and k′′ ≤ ∆(G′)tw(G′) from Equation (7). Putting these together: SOL ≤ 2∆(G′)tw(G′) and, using Equations (4) and (5) to relate our result to the original graph G, SOL ≤ 4∆(G)tw(G). This last inequality proves the main result of this section.
Theorem 2
An algorithm for finding the optimal parsing strategy of an arbitrary LCFRS production would imply a 4∆(G) approximation algorithm for treewidth.
Whether such an approximation algorithm for treewidth is possible is an open problem. The best-known result is the O(√log k) approximation result of Feige, Hajiaghayi, and Lee (2005), which improves on the O(log k) result of Amir (2001). This indicates that, although polynomial-time factorization of LCFRS rules to optimize parsing complexity may be possible, it would require progress on general algorithms for treewidth.
Conclusion
We have demonstrated that a number of techniques used for specific parsing problems can be found algorithmically from declarative specifications of the grammar. Our method involves finding the optimal tree decomposition of a graph, which is in general an NP-complete problem. However, the relation to tree decomposition allows us to exploit existing algorithms for this problem, such as the linear time algorithm of Bodlaender (1996) for graphs of bounded treewidth. In practice, grammar rules are typically small, so finding the tree decomposition is not computationally expensive, and is in fact trivial in comparison to the original parsing problem. Given the special structure of the graphs derived from LCFRS productions, however, we have explored whether finding optimal tree decompositions of these graphs, and therefore optimal parsing strategies for LCFRS productions, is also NP-complete. Although a polynomial time algorithm for this problem would not necessarily imply that P = NP, it would require progress on fundamental, well-studied problems in graph theory. Therefore, it does not seem possible to exploit the special structure of graphs derived from LCFRS productions.
Figure 1
CFG parsing in weighted deduction notation.
Figure 5
Factor graph for the binary CFG deduction rule of Figure 1.
Figure 8
Treewidth applied to the SCFG rule of Equation (2).
AcknowledgmentsThis work was funded by NSF grants IIS-0546554 and IIS-0910611. We are grateful to Giorgio Satta for extensive discussions on grammar factorization, as well as for feedback on earlier drafts from Mehdi Hafezi Manshadi, Matt Post, and four anonymous reviewers.
Aho, Albert V. and Jeffery D. Ullman. 1972. The Theory of Parsing, Translation, and Compiling, volume 1. Prentice-Hall, Englewood Cliffs, NJ.
Amir, Eyal. 2001. Efficient approximation for triangulation of minimum treewidth. In 17th Conference on Uncertainty in Artificial Intelligence, pages 7-15, Seattle, WA.
Arnborg, Stefen, Derek G. Corneil, and Andrzej Proskurowski. 1987. Complexity of finding embeddings in a k-tree. SIAM Journal of Algebraic and Discrete Methods, 8:277-284.
Bar-Hillel, Yehoshua, M. Perles, and E. Shamir. 1961. On formal properties of simple phrase structure grammars. Zeitschrift für Phonetik, Sprachwissenschaft und Kommunikationsforschung, 14:143-172. Reprinted in Y. Bar-Hillel. (1964). Language and Information: Selected Essays on Their Theory and Application, Addison-Wesley, Reading, MA, pages 116-150.
Bodlaender, H. L. 1996. A linear time algorithm for finding tree decompositions of small treewidth. SIAM Journal on Computing, 25:1305-1317.
Bodlaender, Hans L., John R. Gilbert, Hjálmtýr Hafsteinsson, and Ton Kloks. 1995. Approximating treewidth, pathwidth, frontsize, and shortest elimination tree. Journal of Algorithms, 18(2):238-255.
Chekuri, Chandra and Anand Rajaraman. 1997. Conjunctive query containment revisited. In Database Theory - ICDT '97, volume 1186 of Lecture Notes in Computer Science. Springer, Berlin, pages 56-70.
Chiang, David. 2007. Hierarchical phrase-based translation. Computational Linguistics, 33(2):201-228.
Dechter, Rina and Judea Pearl. 1989. Tree clustering for constraint networks. Artificial Intelligence, 38(3):353-366.
Eisner, Jason and John Blatz. 2007. Program transformations for optimization of parsing algorithms and other weighted logic programs. In Shuly Wintner, editor, Proceedings of FG 2006: The 11th Conference on Formal Grammar. CSLI Publications, pages 45-85, Malaga.
Eisner, Jason and Giorgio Satta. 1999. Efficient parsing for bilexical context-free grammars and head automaton grammars. In Proceedings of the 37th Annual Conference of the Association for Computational Linguistics (ACL-99), pages 457-464, College Park, MD.
Eisner, Jason and Giorgio Satta. 2000. A faster parsing algorithm for lexicalized tree-adjoining grammars. In Proceedings of the 5th Workshop on Tree-Adjoining Grammars and Related Formalisms (TAG+5), pages 14-19, Paris.
Feige, Uriel, MohammadTaghi Hajiaghayi, and James R. Lee. 2005. Improved approximation algorithms for minimum-weight vertex separators. In STOC '05: Proceedings of the thirty-seventh annual ACM symposium on Theory of computing, pages 563-572, Baltimore, MD.
Flum, Jörg, Markus Frick, and Martin Grohe. 2002. Query evaluation via tree-decompositions. Journal of the ACM, 49(6):716-752.
Galley, Michel, Mark Hopkins, Kevin Knight, and Daniel Marcu. 2004. What's in a translation rule? In Proceedings of the 2004 Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL-04), pages 273-280, Boston, MA.
Gildea, Daniel. 2010. Optimal parsing strategies for Linear Context-Free Rewriting Systems. In Proceedings of the 2010 Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL-10), pages 769-776, Los Angeles, CA.
Gildea, Daniel and Daniel Štefankovič. 2007. Worst-case synchronous grammar rules. In Proceedings of the 2007 Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL-07), pages 147-154, Rochester, NY.
Gómez-Rodríguez, Carlos, Marco Kuhlmann, Giorgio Satta, and David Weir. 2009. Optimal reduction of rule length in Linear Context-Free Rewriting Systems. In Proceedings of the 2009 Meeting of the North American Chapter of the Association for Computational Linguistics (NAACL-09), pages 539-547, Boulder, CO.
Goodman, Joshua. 1999. Semiring parsing. Computational Linguistics, 25(4):573-605.
Huang, Liang, Hao Zhang, and Daniel Gildea. 2005. Machine translation as lexicalized parsing with hooks. In International Workshop on Parsing Technologies (IWPT05), pages 65-73, Vancouver.
Huang, Liang, Hao Zhang, Daniel Gildea, and Kevin Knight. 2009. Binarization of synchronous context-free grammars. Computational Linguistics, 35(4):559-595.
Jensen, Finn V., Steffen L. Lauritzen, and Kristian G. Olesen. 1990. Bayesian updating in causal probabilistic networks by local computations. Computational Statistics Quarterly, 4:269-282.
Johnson, Mark. 2007. Transforming projective bilexical dependency grammars into efficiently-parsable CFGs with unfold-fold. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 168-175, Prague.
Knuth, D. 1977. A generalization of Dijkstra's algorithm. Information Processing Letters, 6(1):1-5.
Kschischang, F. R., B. J. Frey, and H. A. Loeliger. 2001. Factor graphs and the sum-product algorithm. IEEE Transactions on Information Theory, 47(2):498-519.
Kuhlmann, Marco and Joakim Nivre. 2006. Mildly non-projective dependency structures. In Proceedings of the International Conference on Computational Linguistics/Association for Computational Linguistics (COLING/ACL-06), pages 507-514, Sydney.
Kuhlmann, Marco and Giorgio Satta. 2009. Treebank grammar techniques for non-projective dependency parsing. In Proceedings of the 12th Conference of the European Chapter of the ACL (EACL-09), pages 478-486, Athens.
McAllester, David. 2002. On the complexity analysis of static analyses. Journal of the ACM, 49(4):512-537.
Melamed, I. Dan, Giorgio Satta, and Ben Wellington. 2004. Generalized multitext grammars. In Proceedings of the 42nd Annual Conference of the Association for Computational Linguistics (ACL-04), pages 661-668, Barcelona.
Nederhof, M.-J. 2003. Weighted deductive parsing and Knuth's algorithm. Computational Linguistics, 29(1):135-144.
Rambow, Owen and Giorgio Satta. 1999. Independent parallelism in finite copying parallel rewriting systems. Theoretical Computer Science, 223(1-2):87-120.
Satta, Giorgio. 1992. Recognition of Linear Context-Free Rewriting Systems. In Proceedings of the 30th Annual Conference of the Association for Computational Linguistics (ACL-92), pages 89-95, Newark, DE.
Shafer, G. and P. Shenoy. 1990. Probability propagation. Annals of Mathematics and Artificial Intelligence, 2:327-353.
Shieber, Stuart M., Yves Schabes, and Fernando C. N. Pereira. 1995. Principles and implementation of deductive parsing. The Journal of Logic Programming, 24(1-2):3-36.
Vijay-Shankar, K., D. L. Weir, and A. K. Joshi. 1987. Characterizing structural descriptions produced by various grammatical formalisms. In Proceedings of the 25th Annual Conference of the Association for Computational Linguistics (ACL-87), pages 104-111, Stanford, CA.
Wellington, Benjamin, Sonjia Waxmonsky, and I. Dan Melamed. 2006. Empirical lower bounds on the complexity of translational equivalence. In Proceedings of the International Conference on Computational Linguistics/Association for Computational Linguistics (COLING/ACL-06), pages 977-984, Sydney.
Wu, Dekai. 1997. Stochastic inversion transduction grammars and bilingual parsing of parallel corpora. Computational Linguistics, 23(3):377-403.
Zhang, Hao and Daniel Gildea. 2007. Factorization of synchronous context-free grammars in linear time. In NAACL Workshop on Syntax and Structure in Statistical Translation (SSST), pages 25-32, Rochester, NY.
14,035,462 | PANGLYZER: SPANISH LANGUAGE ANALYSIS SYSTEM | The purpose of this paper is to describe the functions and procedures of the eight components of the Panglyzer Spanish analyzer of the knowledge-based engine of the Pangloss machine translation system. The Panglyzer follows a multi-pass approach consisting of preprocessing, part-of-speech tagging, phrase recognition, proper name classification, phrase analysis, clause recognition, clause analysis and reading ranking. | [] | PANGLYZER: SPANISH LANGUAGE ANALYSIS SYSTEM
David Farwell
Computing Research Laboratory
New Mexico State University
Steve Helmreich
Computing Research Laboratory
New Mexico State University
Wanying Jin
Computing Research Laboratory
New Mexico State University
Mark Casper
Computing Research Laboratory
New Mexico State University
Jim Hargrave
Computing Research Laboratory
New Mexico State University
Hugo Molina-Salgado
Computing Research Laboratory
New Mexico State University
Fuliang Weng
Computing Research Laboratory
New Mexico State University
PANGLYZER: SPANISH LANGUAGE ANALYSIS SYSTEM
The purpose of this paper is to describe the functions and procedures of the eight components of the Panglyzer Spanish analyzer of the knowledge-based engine of the Pangloss machine translation system. The Panglyzer follows a multi-pass approach consisting of preprocessing, part-of-speech tagging, phrase recognition, proper name classification, phrase analysis, clause recognition, clause analysis and reading ranking.
Introduction
The function of the Spanish analysis component of the PANGLOSS system, or PANGLYZER, is to provide for each clause in the input text a set of possible meaning representations ranked on the basis of likelihood. These are then handed to an augmentor which either interactively or automatically selects a representation from among the set of possibilities and fills in various sorts of information to produce a reading that is compatible with the context and that can act as a basis for generation.
The approach has been to develop the system in a bottom-up manner focusing on providing layer after layer of increasingly abstract analysis in a multi-pass process. This multi-pass architecture allows for semi-independent module construction, incremental development, and is amenable to robust performance. Each level of analysis is based on a focussed type of knowledge and, to the extent possible, exploits proven techniques. The levels alternate between recognition (segmentation) modules and analysis modules. For example, the preprocessor segments text into words while the part-of-speech tagger analyzes their part of speech. Since a high premium has been placed on robustness, we are following an iterative approach to design which relies on rapidly producing an initial prototype and then following a short test and revise cycle. Thus, at this point, all but the deepest level of analysis produces throughput, and the on-going objective is to improve the accuracy of that throughput from one test cycle to the next.
System Architecture
The architecture of the PANGLYZER is a by-product of the bottom up approach to development described above. It consists of eight components that sequentially process the data. The first is a Preprocessor which converts a text length input ASCII character string into a file of PROLOG data structures and then builds PROLOG strings corresponding to the sentences of the input text. The second is a Spanish Part-of-Speech Tagger which provides a standard data structure for each item of an input PROLOG string which includes a part of speech, relevant inflectional information, and position index. The third component is the Phrase Recognizer which groups contiguous elements of the input part-ofspeech tagged string into phrase length chunks, inserting brackets around the chunks and assigning phrasal categories. The fourth component is a Proper-Noun Classifier which, operating over the entire text, assigns a semantic category such as personal name, place name, company name, and so on, along with relevant inflectional information to each element of the input tagged as a proper noun. The fifth component, the Phrase Analyzer, provides a set of possible semantic representations for each phrase of the input string. The sixth component is a Clause Recognizer which groups contiguous sets of phrase analyses in the input string into clause length chunks, inserting labeled brackets indicating clause types around the chunk and indexing its positions within the clause. The seventh component, the Clause Analyzer, assigns syntactic dependency relations such as head, subject-of, object-of, circumstantialmodifier-of, and so on to the various constituent phrases of each clause. The eighth component is the Reading Ranker which provides a likelihood score for each possible combination of phrase analyses given their contexts within the clause.
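A schematic view of this eight-pass pipeline is sketched below in Python. The function names and the trivial data shapes are assumptions made purely to show the control flow; they do not reproduce the PROLOG data structures or the actual behavior of the individual components.

def preprocess(text):           return text.split()                     # words per sentence
def tag(words):                 return [[w, w, "tag"] for w in words]   # word/lemma/tag
def recognize_phrases(tagged):  return [tagged]                         # phrase chunks
def classify_proper_nouns(ph):  return ph                               # proper-noun categories
def analyze_phrases(ph):        return ph                               # phrase semantics
def recognize_clauses(ph):      return [ph]                             # clause chunks
def analyze_clauses(cl):        return cl                               # dependency relations
def rank_readings(cl):          return cl                               # likelihood ranking

def panglyzer(text):
    data = text
    for component in (preprocess, tag, recognize_phrases, classify_proper_nouns,
                      analyze_phrases, recognize_clauses, analyze_clauses,
                      rank_readings):
        data = component(data)      # each pass consumes the previous pass's output
    return data

print(panglyzer("Al momento de su venta a Iberia"))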
Preprocessor
The function of the Preprocessor is to convert a Spanish input text, in the form of an ASCII file, into a file of data structures. First, the input ASCII file is converted into a file of PROLOG strings each of which contains a single atom corresponding to a word or punctuation mark in the input text. In regard to the example text, In the second step, these unit strings are concatenated into PROLOG strings corresponding to the sentences of the input text. The Preprocessor takes as input the file of unit PROLOG strings above and yields a file of Prolog strings of the form:
['Al',nomento,de,su,venta,a,'Iberia',',','VIASA',...].
Spanish Part-of-Speech Tagger
The Spanish part-of-speech tagger automatically labels words with part-of-speech categories. The tagger uses an on-line dictionary, Spanish analytical morphology and fix-up rules to tag words. Fix-up rules use local context to alternate the part-of speech in case the part-of speech is inappropriate.
The morphological analyzer takes as input a sentence, analyzes each word and generates the corresponding lemma. It also produces morpho-syntactic information gleaned from the word form. Tag category lookup order was experimentally established and is used to decide the most likely analysis in case of ambiguity. The morphological analyzer uses the Collins Spanish-English Dictionary for single word lexical items and a custom built database for phrasals. The morphological analyzer supports all verbs found in Collins and their inflected forms, as well as most inflectional morphology for nouns, adjectives, pronouns and articles.
The preliminary tagging by the morphological analyzer results in a string of word/lemma/tag sublists. The fix-up rule component takes this list of word/lemma/tag sublists as input and uses local syntactic cues to repair common mistakes. The fix-up rules themselves try to match sequences of words, lemmas, parts of speech or inflectional information against the input. If a match is found, it triggers the revision of some particular part-of-speech tag. For instance, in the example sentence, the verb tag for venta is altered to a noun by a fix-up rule that changes the tag of a verb immediately following a possessive adjective to a noun (if the word in question has a possible noun reading).
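As a rough illustration, a minimal Python sketch of such a fix-up rule might look as follows; the tag names, data layout and dictionary lookup are simplifying assumptions, not the actual PANGLYZER code.

# Illustrative fix-up rule: a token tagged as a verb that immediately follows
# an adjective tag (the real rule is restricted to possessive adjectives) is
# retagged as a noun, provided the word also has a possible noun reading.
NOUN_READINGS = {"venta", "compra"}   # placeholder dictionary lookup

def apply_fixup(tagged):
    """tagged: list of [word, lemma, tag] sublists; returns a repaired copy."""
    repaired = [item[:] for item in tagged]
    for i in range(1, len(repaired)):
        prev_tag = repaired[i - 1][2]
        word, lemma, tag = repaired[i]
        if prev_tag.startswith("adjective") and tag.startswith("verb") \
                and word in NOUN_READINGS:
            repaired[i][2] = "noun(feminine,singular)"   # revised tag
    return repaired

example = [["su", "su", "adjective(neuter,sg)"],
           ["venta", "vender", "verb(present,3,sg)"]]
print(apply_fixup(example))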
Phrase Recognizer
The Phrase Recognizer identifies basic grammatical constructions at the phrase level using a Definite Clause Grammar (Pereira & Warren, 1980; Huang, 1988). The DCG used by this module is composed of a collection of rules that identify noun phrases np, verbal constructions vc, prepositional phrases pp, proper noun phrases pn and preposition plus proper noun phrases ppn. Some words fall into special non-phrasal single-word categories such as conjunctions cj, complementizers cm and punctuation pc. The residue of unassimilated words is simply tagged as single-words sw. The DCG includes rules to tackle verb constructions and noun phrases; it should be noted that these are generally not complex. Except for certain cases where the semantic analysis of the whole is assured to be correct, such as phrases like 30 millones de dólares (30 million dollars), each prepositional phrase, appositive, and complement is analyzed separately (see footnote 2). The input file used by the Phrase Recognizer is the output from the Tagger. The Phrase Recognizer takes the first words in the input sentence and tries to match them with one of the constructions determined by the DCG. When one of the rules succeeds, the sequence of words which satisfies that rule is bracketed and labeled for the specified category. For instance, for the example sentence, the output begins with two prepositional phrases spanning elements 1 through 5:
[[pp,[1,2],[[1,1],['Al'/al/preposition]], [[2,2],[momento/momento/noun(masculine,singular)]]], [pp,[3,5],[[3,3],[de/de/preposition]], [[4,4],[su/su/adjective(neuter,sg)]], [[5,5],[venta/venta/noun(feminine,singular)]]],...]

The recognition of grammatical structures proceeds sequentially until all the words in the input have been given a phrasal analysis. The final output file consists of lists of sublists corresponding to the basic phrasal constructions determined by the DCG. Each list corresponds to a sentence of the input text.
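A grossly simplified Python analogue of this left-to-right chunking strategy is sketched below; the toy patterns merely stand in for the DCG rules and are assumptions, not the actual grammar.

# Toy chunker: scan the tag sequence left to right and bracket the longest
# matching phrase pattern, falling back to a single-word chunk (sw).
PATTERNS = [
    ("pp", ["preposition", "noun"]),
    ("pp", ["preposition", "adjective", "noun"]),
    ("np", ["article", "noun"]),
    ("vc", ["verb"]),
]

def chunk(tags):
    chunks, i = [], 0
    while i < len(tags):
        best = None
        for label, pattern in PATTERNS:
            if tags[i:i + len(pattern)] == pattern and \
                    (best is None or len(pattern) > len(best[1])):
                best = (label, pattern)
        if best:
            label, pattern = best
            chunks.append([label, [i + 1, i + len(pattern)]])
            i += len(pattern)
        else:
            chunks.append(["sw", [i + 1, i + 1]])
            i += 1
    return chunks

print(chunk(["preposition", "noun", "preposition", "adjective", "noun"]))
# -> [['pp', [1, 2]], ['pp', [3, 5]]], mirroring the bracketing shown above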
Proper-noun Classifier
The Proper-noun Classifier provides each proper noun with a unique classification from the following list of categories: government entity, geographical entity, corporate entity, human name, date, professional title. If classification into one of the above categories fails, a default classification of other is given. The component has a multi-pass architecture. That is, all proper nouns in a text are sent through pass one. Those that fail to be positively identified are sent through pass two. Again, those failing to be positively identified undergo pass three. On the first pass, classification is attempted via a list-based matching scheme. On the second pass, contextual information is used to further disambiguate non-unique tags from the first pass. Finally, default rules are applied to ensure a unique classification for each proper noun.
The first pass involves searching lists for a complete or partial match with the proper noun under consideration. The lists used were compiled from a variety of resources including gazetteers, phone books, etc.
Pass one classification of a given proper noun actually is carried out in two phases. First, an attempt is made to uniquely tag the proper noun as a date, as a corporate entity, or as a title. If this process succeeds, the proper noun is considered to be positively identified, the tag is assigned and processing ends for that item. Once a tag has been assigned, the proper noun and tag are stored for future reference. If it fails, then the proper noun is exhaustively tagged with every possible classification found by matching the noun against the word lists. In doing so, a three-level tag-ranking system is employed to aid in future disambiguation. If this exhaustive tagging procedure results in only one possible tag, then the item is considered to be positively identified and the tag is assigned.
Pass two attempts to select one of the multiple tags of the exhaustively tagged proper nouns from the first pass. The first context considered is the set of already uniquely classified proper nouns. Any proper noun that partially overlaps with such nouns is given the corresponding classification, provided certain partial-match criteria are satisfied.
The second type of context-based selection employed in pass two involves an analysis of the surrounding words, their parts of speech, and their proper-noun classifications (if any). A number of experimentally developed heuristics, involving different combinations of one or more of the above types of information, are applied to completely select a tag for the proper noun, or, at least, to further reduce the possible tags by filtering out one or more. Mechanically, the rules are applied starting from those which utilize the least amount of contextual information to those which utilize the most contextual information. If either of these processes select a single tag, the item is considered to be positively identified and that tag is assigned.
The final pass through the text involves applying default rules to any remaining multiply classified proper nouns. If exactly one of the possible classifications has the highest rank, then it is the classification assigned to the proper noun; otherwise, the default tag of other is assigned.
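The following Python sketch illustrates the general shape of this three-pass scheme; the gazetteer contents, rank values and context heuristics are placeholders rather than the real resources and rules.

# Pass one: list-based matching; pass two: context-based selection;
# pass three: default rules producing a unique tag.
GAZETTEERS = {
    "corporate": {"Iberia", "VIASA"},
    "geographical": {"Caracas"},
}

def pass_one(proper_noun):
    """Return the set of (category, rank) tags licensed by list lookup."""
    tags = set()
    for category, entries in GAZETTEERS.items():
        if proper_noun in entries:
            tags.add((category, 1))            # rank 1: exact match
        elif any(proper_noun in e or e in proper_noun for e in entries):
            tags.add((category, 2))            # rank 2: partial match
    return tags

def pass_two(tags, context_categories):
    """Keep only tags supported by already classified context, if any."""
    supported = {t for t in tags if t[0] in context_categories}
    return supported or tags

def pass_three(tags):
    """Default rules: a single highest-ranked tag wins, otherwise 'other'."""
    if not tags:
        return "other"
    best_rank = min(rank for _, rank in tags)
    best = [cat for cat, rank in tags if rank == best_rank]
    return best[0] if len(best) == 1 else "other"

tags = pass_one("VIASA")
tags = pass_two(tags, context_categories={"corporate"})
print(pass_three(tags))   # -> corporate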
5. Phrase Analyzer
The function of the Phrase Analyzer is to assign to each phrase identified by the Phrase Recognizer, as extended by the Proper-Noun Classifier, a set of possible meaning representations. The Phrase Analyzer constructs all possible interpretations for each phrase and then passes them on to the Clause Analyzer, whose task is to compose the semantically coherent readings at the clause level.
The performance of the Phrase Analyzer depends heavily on lexical information stored in the PANGLYZER's two lexicons: the Spanish lexicon, which encodes information specific to Spanish, and the word sense lexicon, which contains the semantic information which Spanish shares with other languages in general. Items in the two lexicons are tied together by word sense tokens. These tokens are drawn from the sense definitions of Longman's Dictionary of Contemporary English (LDOCE) (Procter et al., 1978).
Below is an example of an entry for a singular form of a Spanish noun in the Spanish lexicon. This entry provides information about agreement, gender and syntactic case, along with a word sense token indicating the item in the word sense lexicon with which it is associated:

se_form(momento,ts,m,_F,moment_0_l).
Entries in the word sense lexicon of the conceptual category entity provide information about semantic class, countability, LDOCE semantic class, and LDOCE subject domain, as shown in the following entry:

ent(moment_0_l,nrm,time,c,abstract,open).

Input to the Phrase Analyzer comes from the Phrase Recognizer, as augmented by the Proper-noun Classifier. In regard to the example sentence, the input begins with:

[pp,[1,2],[[1,1],['Al'/al/preposition]], [[2,2],[momento/momento/noun(masculine,singular)]]]

which corresponds to the Spanish prepositional phrase al momento. The Phrase Analyzer uses a Spanish DCG to parse al momento. These Spanish grammar rules are compatible with the grammar rules in the Phrase Recognizer. The syntactic information on phrase category supplied by the Phrase Recognizer (i.e., np, pn, pp, ppn, etc.) is used to index the corresponding DCG rules of the Phrase Analyzer. The grammar rules in the Phrase Analyzer also access the lexicon entries and unify the syntactic and semantic information in the entries to produce meaning representations. For the example phrase, the Phrase Analyzer produced two possible analyses, the first of which begins with [[mod,[string,al,momento], [g_rel,B] ...
Clause Recognizer
The function of the Clause Recognizer is to group the phrases into clauses, inserting labeled clause boundaries in the process. It does this by applying a DCG for sequences of phrasal categories in top-down, depth-first, left-to-right fashion. It recognizes several types of clauses (finite, relative, participial, infinitival, etc.) as well as groups of phrases that do not correspond to any of the sequences expected by the DCG. These are assigned to a no-clause category.
Generally, the DCG recognizes a sentence-length input sequence of phrases as a single finite clause, as a single finite clause followed by other clauses, or as a single non-clause. The grammar defines finite clauses as consisting of zero or more phrases followed by a finite verbal construction which is followed by zero or more other phrases. A phrase may be a relative clause, a passive participial clause, an infinitival clause, or some basic phrase. A relative clause consists of a relativizer followed by a finite verbal construction possibly followed by some number of phrases. A participial clause consists of a participial possibly followed by some number of phrases. Finally, an infinitival clause consists of a preposition followed by an infinitive possibly followed by some number of phrases. For the example sentence, the resultant output is a bracketing of the phrase sequence into labeled clauses, in the same list-of-sublists format as the Phrase Recognizer output.
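As a toy illustration of the clause-grouping idea only, phrase labels can be split into clause chunks around finite verbal constructions; the sketch below deliberately ignores relative, participial and infinitival clauses and is not the grammar described above.

def group_clauses(phrase_labels):
    """Greedily start a new clause chunk at every additional finite verbal construction."""
    clauses, current, seen_finite = [], [], False
    for label in phrase_labels:
        if label == "vc_finite" and seen_finite:
            clauses.append(("finite", current))
            current, seen_finite = [], False
        current.append(label)
        if label == "vc_finite":
            seen_finite = True
    clauses.append(("finite" if seen_finite else "no-clause", current))
    return clauses

print(group_clauses(["pp", "pp", "np", "vc_finite", "np", "cj", "vc_finite", "pp"]))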
Clause Analyzer
The function of the Clause Analyzer is to take clause-length sequences of phrases and identify the grammatical dependency relations among the phrases they contain, outputting a set of tables each of which represents a possible dependency analysis of the clause.
Given an input sentence marked for clause boundaries, the component iteratively applies an as yet limited DCG to each clause. Basically, the DCG states that a clause consists of a subject followed by a finite verbal construction, optionally followed by an object, optionally followed by some number of adverbial modifiers or a clause consists of a passive participial optionally followed by an object.
Subjects and objects consist of a noun phrase, optionally followed by a prepositional phrase, or a preposition plus proper noun. Modifiers consist of a prepositional phrase or a preposition plus proper noun.
Applying this DCG to the input above results in a bracketed output in which the dependency relations are captured in tables appended to the end of each clause.
There is a second pass over each clause during which specific subgoals of the rules are ignored. So, for example, the identification of en promedio de 13 años de vuelo as a circumstantial modifier of tenían was made on the basis of applying one of the clause rules in the DCG with the subject subgoal suspended.
Reading Ranker
The Reading Ranker provides a ranked listing of the possible input sentence readings produced by the Phrase Analyzer and Clause Analyzer. Essentially, the Reading Ranker must search and rank the space of all possible syntactic and semantic combinations.
In light of this combinatorially large search space, an attempt must be made to constrain its size. Currently, the use of both syntactic and semantic constraints is being considered. Syntactically, only the best parses from the Clause Analyzer can be evaluated. Also, part-of-speech tags can be used to reduce the number of possible word senses. Semantically, preference information should also eliminate or indicate the likelihood of certain word sense co-occurrences.
It is unlikely that the above constraints will sufficiently reduce the size of the search space so as to allow the use of conventional (exhaustive) search methods. Thus, weak search techniques, which do not exhaustively search the entire space, must be employed.
Cowie, Guthrie and Guthrie (1992) investigated simulated annealing for the purpose of word sense disambiguation. Their approach involved disambiguating all of the words in a sentence simultaneously, where the rank (evaluation) of a particular set of selected senses was determined by the overlap of their LDOCE definitions. They report a word disambiguation accuracy of 47% at the sense level and 72% at the homograph level.
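A minimal Python sketch of this definition-overlap scoring is given below; the tiny sense inventory is invented for illustration, and the exhaustive search merely stands in for the simulated annealing (or, later, genetic) search used over realistic sentences.

from itertools import product

def overlap_score(definitions):
    """Count pairwise shared words among the chosen sense definitions."""
    score = 0
    for i in range(len(definitions)):
        for j in range(i + 1, len(definitions)):
            score += len(set(definitions[i]) & set(definitions[j]))
    return score

def best_assignment(sense_inventory):
    """Pick one sense per word so that the joint definition overlap is maximal."""
    best = None
    for combo in product(*sense_inventory.values()):
        s = overlap_score([d.split() for d in combo])
        if best is None or s > best[0]:
            best = (s, combo)
    return best

inventory = {
    "bank":  ["land along the side of a river", "place where money is kept"],
    "shore": ["land along the edge of the sea or a river"],
}
print(best_assignment(inventory))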
The LDOCE overlap disambiguation method described above was re-implemented with a genetic algorithm replacing simulated annealing as the weak search technique, with almost no change in accuracy. Part-of-speech information was then used to restrict the possible senses for the words. Not surprisingly, homograph level accuracy jumped to above 90%, due to the fact that in LDOCE, the senses of many words are grouped into homographs on the basis of their part of speech. Sense level accuracy, on the other hand, actually dropped by several points. In part this decrease in performance was due to an inadequate morphological analyzer which produced stems with incorrect parts of speech.
It is likely that the less than satisfactory results of both simulated annealing and the genetic algorithm are the result more of an ineffective evaluation metric than of an inherent inability to search this particular space. In fact, several experiments revealed that the best interpretation was being evaluated as much worse by the LDOCE evaluation metric than incorrect interpretations. The possibility of enhancing the LDOCE overlap method by including WordNet synsets of the sense definition words is being considered.
A certain degree of skepticism remains as to whether dictionary sense definition overlap or co-occurrence can be used to successfully disambiguate word senses. Thus, an investigation has been launched to determine whether statistically derived word sense meaning vectors might prove to be more successful. This approach has been used by Schütze (1993) and Landauer (1994) to disambiguate words and discriminate amongst synonyms. These vectors might be used in one of two ways: either as a more accurate evaluation metric for a weak search technique, or in a more direct word-for-word disambiguation attempt.
Summary and System Performance
The Panglyzer functions as the analysis portion of a knowledge-based MT engine. The results of this KBMT system are themselves only inputs to a multi-engine MT system (Pangloss). As a result, it is difficult to judge the performance of the Panglyzer solely on system (Pangloss) throughput, and we have yet to develop a notion of adequacy of analysis (Panglyzer) output. However, each module of the Panglyzer has a fairly well-defined task, and appropriate output for each module can be judged with a fair degree of accuracy.
For instance, the Preprocessor is able to identify sentence and word boundaries with near perfect accuracy. The Part-of-Speech Tagger operates at about 93% accuracy, when compared with the judgments of Spanish language experts. The Phrase Recognizer has an accuracy rate of 90% on Part-of-Speech Tagger output; discounting Tagger errors, its accuracy rate is roughly 98%. The Proper-Noun Classifier will classify about 80% of the proper nouns in a given text correctly.
The remaining modules have yet to undergo rigorous testing. For the Phrase Analyzer, a performance estimate on the basis of a sample text produced an appropriate representation for 77% of the phrases in the text. This representation may be one of several produced for any particular phrase. When failure due to missing lexical items and incorrectly recognized phrases is discounted, appropriate representations were produced for 97% of the phrases. As is evident, the quality of the output of the Phrase Analyzer is highly dependent on having lexicon entries for the words in the text being analyzed.
Over a short text the Clause Recognizer identified about 76% of the clauses and 56% of the clauses contained the appropriate constituents. In a test over a text with 40 clauses, the Clause Analyzer produced correct and complete results in only 4 cases, partially correct in 25, and incorrect results in 11.
Finally, given the cascading architecture described here, the performance of any module usually depends greatly on the performance of the preceding modules. For example, an incorrect tag may well cause an incorrect phrase to be constructed, which may then be analyzed and grouped into a clause incorrectly.
To account for these difficulties, we are attempting to measure the performance of each module in two respects: first using actual system output from previous modules and second on the basis of manually corrected "golden" input.
Although al is tagged as a preposition, under analysis it is treated as the preposition a and the article el. See the analysis at the end of Section 5.
These grammatical constructions do not directly correspond to any theoretical linguistic constructs. Instead, they are intended to identify potential arguments, predicates or adjuncts.
References

Cowie, J., Guthrie, J. and Guthrie, L., 1992, Lexical Disambiguation using Simulated Annealing. Proceedings of the 15th International Conference on Computational Linguistics (COLING-92), Nantes, France, July, pp. 359-365.

Huang, X-M., 1988, XTRA: The Design and Implementation of a Fully Automatic Machine Translation System. Memoranda in Computing and Cognitive Science, MCCS-88-121, Computing Research Laboratory, New Mexico State University, Las Cruces, New Mexico.

Landauer, T. K., 1994, "How is it that you know so much?", an invited lecture at New Mexico State University, January 1994.

Pereira, F. and Warren, D., 1980, Definite Clause Grammars for Language Analysis: A Survey of the Formalism and a Comparison with Augmented Transition Networks. Artificial Intelligence, 13: 231-278.

Procter, P., et al., 1978, Longman Dictionary of Contemporary English. Longman Group Limited, Harlow, Essex, England.

Schütze, H., 1993, Word Space. In S. J. Hanson, J. D. Cowan, and C. L. Giles (Eds.), Advances in Neural Information Processing Systems 5, 895-902. San Mateo, CA: Morgan Kaufmann Publishers. |
184,482,937 | ProblemSolver at SemEval-2019 Task 10: Sequence-to-Sequence Learning and Expression Trees | This paper describes our participation in SemEval-2019 shared task "Math Question Answering", where the aim is to create a program that could solve the Math SAT questions automatically as accurately as possible. We went with a dual-pronged approach, building a Sequence-to-Sequence Neural Network pre-trained with augmented data that could answer all categories of questions and a Tree system, which can only answer a certain type of questions. The systems did not perform well on the entire test data given in the task, but did decently on the questions they were actually capable of answering. The Sequence-to-Sequence Neural Network model managed to get slightly better than our baseline of guessing "A" for every question, while the Tree system additionally improved the results. | [
6205777,
184482945,
560565
] | ProblemSolver at SemEval-2019 Task 10: Sequence-to-Sequence Learning and Expression Trees
June 6-7, 2019
Xuefeng Luo
Linguistics Department
University of Tuebingen
Germany
Alina Baranova
Linguistics Department
University of Tuebingen
Germany
Jonas Biegert
Linguistics Department
University of Tuebingen
Germany
ProblemSolver at SemEval-2019 Task 10: Sequence-to-Sequence Learning and Expression Trees
Proceedings of the 13th International Workshop on Semantic Evaluation (SemEval-2019)
Minneapolis, Minnesota, USA, June 6-7, 2019, page 1292
This paper describes our participation in SemEval-2019 shared task "Math Question Answering", where the aim is to create a program that could solve the Math SAT questions automatically as accurately as possible. We went with a dual-pronged approach, building a Sequence-to-Sequence Neural Network pre-trained with augmented data that could answer all categories of questions and a Tree system, which can only answer a certain type of questions. The systems did not perform well on the entire test data given in the task, but did decently on the questions they were actually capable of answering. The Sequence-to-Sequence Neural Network model managed to get slightly better than our baseline of guessing "A" for every question, while the Tree system additionally improved the results.
Introduction
The data set for the task (Hopkins et al., 2019) includes questions used in the Math SAT. There are three broad categories of questions: closedvocabulary and open-vocabulary algebra questions, and geometry questions. All types of questions consist in large part of natural language. Closed-vocabulary algebra questions typically contain math equations and many math-specific words, while open-vocabulary algebra questions include more everyday vocabulary and quantities which are expressed in letters, numbers or a combination of both. Geometry questions are usually provided with diagrams the analysis of which is necessary for solving the problems. Most questions of all categories are multiple-choice questions with five possible options, but some questions have a numeric answer.
We present two systems to tackle these math problems. One of them, a sequence-to-sequence LSTM model pre-trained with augmented data, is applied to all three types of questions, while the other, a system based on expression trees, produces answers exclusively for open-vocabulary algebra questions.
Related Work
In their work, Roy and Roth (2015) introduced binary expression trees that represent and solve math word problems. To choose the best expression tree out of all possible trees, the authors employed two classifiers: a relevance classifier, which determined if the quantity should be included into the expression tree, and a Lowest Common Ancestor (LCA) classifier, which output the most probable mathematical operation for a pair of quantities. Both classifiers were trained on gold annotations.
Subsequently, two other systems were developed based on Roy and Roth (2015). One of the systems belongs to the same authors and uses the concept of Unit Dependency Graphs (UDGs) to capture the information between units of the quantities (Roy and Roth, 2017). UDGs are then united with expression trees, allowing the information about dependencies between units improve the math problem solver.
Another system (Wang et al., 2018) suggests a method to improve Roy and Roth's approach. By applying deep reinforcement learning, which has proved to be suitable for problems with big search space, the authors achieve better accuracy and efficiency.
An earlier system introduced by Wang et al. (2017) used gated recurrent units (GRU, Chung et al., 2014) and long short-term memory (LSTM, Hochreiter and Schmidhuber, 1997) to automatically solve simple math word problems by converting words into math equations.
Model Description
Sequence-to-Sequence Neural Network
Our model is based on a sample implementation provided by the Keras team (Chollet et al., 2015). This model was able to calculate additions such as "535+61", mapping the input string to the output string "596" with a Sequence-to-Sequence model using LSTM (Hochreiter and Schmidhuber, 1997). Similar to this model, our model also had 128 hidden units and started with an embedding layer over an alphabet of 96 characters. The longest question was 650 characters. We then used an LSTM as the encoder. Each digit in the answer is represented by a separate output vector, so the encoder output is repeated 5 times in order to represent 5 digits. Our decoder was another LSTM layer returning sequences, followed by a time-distributed dense layer with a softmax activation. In addition to this, we added a 0.2-rate Dropout layer (Srivastava et al., 2014) after the embedding layer, the encoder LSTM and the decoder LSTM, to prevent over-fitting. On top of that, we found that reversing and doubling the inputs can greatly improve training performance, in line with Zaremba and Sutskever (2015). The seq2seq model is shown in Figure 1. We did not encode the answers along with the questions; we only compared the answer strings to the questions' choices and made our decisions. We padded all answers to the same length with extra space characters, but our model still was not able to produce exact answers. However, the sequences the model produced were good enough to predict the correct answer for multiple-choice questions. For instance, for the question "If x+345 = 111, what is the value of x?", the output of the system would be "-234444", which is very close to the correct answer "-234". Thus, we wrote a program which compares the initial characters (including "-"), regardless of the extra characters at the end, with the answer options and predicts the correct answer.
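A minimal Keras sketch of the architecture described above might look as follows; the embedding dimension, optimizer and loss are assumptions that are not specified in the text.

# Character-level sequence-to-sequence model: encoder LSTM, repeated encoding
# (one copy per output digit), decoder LSTM, and a time-distributed softmax.
from keras.models import Sequential
from keras.layers import Embedding, Dropout, LSTM, RepeatVector, TimeDistributed, Dense

VOCAB_SIZE = 96          # character alphabet
MAX_QUESTION_LEN = 650   # longest question, in characters
ANSWER_LEN = 5           # five output digit positions
HIDDEN = 128

model = Sequential()
model.add(Embedding(VOCAB_SIZE, 64, input_length=MAX_QUESTION_LEN))  # embedding size assumed
model.add(Dropout(0.2))
model.add(LSTM(HIDDEN))                              # encoder
model.add(Dropout(0.2))
model.add(RepeatVector(ANSWER_LEN))                  # one encoding per answer digit
model.add(LSTM(HIDDEN, return_sequences=True))       # decoder
model.add(Dropout(0.2))
model.add(TimeDistributed(Dense(VOCAB_SIZE, activation="softmax")))
model.compile(loss="categorical_crossentropy", optimizer="adam", metrics=["accuracy"])
# Question strings are reversed and doubled before being fed to the model,
# following Zaremba and Sutskever (2015).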
Tree System
The system of Roy and Roth (2015) has a lot of advantages: for instance, it can solve math problems that require multiple steps and different operations, and it can handle problems even if it did not see similar problems in the training set. That is why we chose to implement this approach for solving open-vocabulary algebra questions.
The expression trees the authors used in their system have a special structure that allows to calculate them in a simple and unambiguous way. In such a tree, the leaves are quantities extracted from the problem text, and the internal nodes are mathematical operations between the quantities. By calculating values of all internal nodes, one can obtain the value of the tree route, which corresponds to the answer of the problem.
Similarly to Roy and Roth (2015), we used the relevance and the LCA classifiers to evaluate all possible expression trees and choose the one that answers the problem correctly. However, instead of using gold annotations, we decided to train the classifiers on all the trees that result in right answers, partly because annotations were not available, and partly because we were curious how well the system could perform with no manual effort invested in annotating training data.
Tree evaluation was done by two simple multilayer perceptrons. As described earlier, the first one returns the probability that a given quantity is relevant, i.e., that a tree which answers the question correctly contains that quantity; the second one returns the probabilities for each of the possible operations to be the lowest common ancestor of a pair of given quantities in a tree that answers the question correctly.
For every possible tree per question, the product of the probabilities of each quantity being relevant was added to the product of the probabilities of the lowest common ancestor of each quantity pair being correct. These scores, as well as the result of the tree, were put in a list and ordered by score. The results of the trees were then matched against the answer options of each question, and the answer option that was first matched in the list was given as the answer to the question. If the question had no answer options, the result of the highest-rated tree was given as the answer to the question.
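A small Python sketch of this scoring rule is shown below; the two probability functions are placeholders standing in for the relevance and LCA multilayer perceptrons.

from itertools import combinations

def score_tree(quantities, lca_op, relevance_prob, lca_prob):
    """quantities: ordered leaves of the tree; lca_op(q1, q2): operation at their lowest common ancestor."""
    relevance = 1.0
    for q in quantities:
        relevance *= relevance_prob(q)
    lca = 1.0
    for q1, q2 in combinations(quantities, 2):
        lca *= lca_prob(q1, q2, lca_op(q1, q2))
    return relevance + lca

# placeholder probabilities standing in for the two classifiers
relevance_prob = lambda q: 0.9
lca_prob = lambda q1, q2, op: 0.8 if op == "*" else 0.1
print(score_tree([100, 2.4], lambda q1, q2: "*", relevance_prob, lca_prob))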
Math Questions

Initially, we tried to train our model directly on the questions, but it turned out that the model could not learn at all. In total, we had slightly more than 1,000 SAT questions, which was insufficient for an RNN model, not to mention that this small training set contained questions of a variety of types (open- and closed-vocabulary algebra questions as well as geometry questions), leaving an even smaller training set for each subtype. Thus, data augmentation was a necessary step. In order to strengthen the connection of numbers, we did not provide the original SAT data with numbers modified, but more than 600,000 simple closed-vocabulary algebra questions.

Among them, there were two types of questions augmented for our model: questions within 3 digits like "If x + 345 = 111, what is the value of x?" and "If x − 345 = 111, what is the value of x?". Not only were numbers and variable names randomized, but the positions of the variables were also switched. In total, there were 614,236 questions, of which the "plus" type had 330,620 and the "minus" type had 283,616. Even though the augmented data differed considerably from the SAT questions, results showed that it still contributed greatly to our training.
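A sketch of the kind of template-based generator this implies is given below; the digit ranges, the variable inventory and the exact wording are assumptions rather than the templates actually used.

import random

VARIABLES = "abcdefghkmnpqrstuvwxyz"

def make_example():
    """Generate one augmented question together with its numeric answer."""
    var = random.choice(VARIABLES)
    a, b = random.randint(1, 999), random.randint(1, 999)
    op = random.choice("+-")
    var_on_left = random.random() < 0.5            # switch the variable's position
    left = "{} {} {}".format(var, op, a) if var_on_left else "{} {} {}".format(a, op, var)
    question = "If {} = {}, what is the value of {}?".format(left, b, var)
    if op == "+":                                  # var + a = b  or  a + var = b
        answer = b - a
    elif var_on_left:                              # var - a = b
        answer = b + a
    else:                                          # a - var = b
        answer = a - b
    return question, str(answer)

print(make_example())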
Training
Rather than training our model on the original SAT data and the augmented data together, we chose to train with the augmented data first, and then continued to train with the original data. There were 40 iterations of 614,236 questions dealing with addition and subtraction. Fractions were also present in the training set. After training with the augmented question set, our model was trained with actual questions from the Math SAT. In total, there were 200 iterations of 805 Math SAT questions. Nevertheless, since the training data was so small, it is highly possible that our model was prone to over-fitting to the training data.
Tree System
Quantities
Quantities were extracted from questions and answers using a rule-based approach. Before the extraction, all mentions of quantities were normalized to digits (e.g. one to 1). Then, numbers, number-word combinations (e.g. 13-inch), small letters denoting quantities (all letters except a) and LaTeX expressions were retrieved. LaTeX expressions that contained only numbers were transformed into numbers (e.g. \frac{1}{10} into 0.1).

In general, all questions that contained quantities other than numbers, or whose answer consisted of several quantities, were filtered out, leaving us with 75% of open-vocabulary questions from the training set. In the next stage, while constructing trees for the training data, we heuristically set the maximum number of quantities in a question to 7, which led to using 59% of the training data.
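The following Python sketch gives a flavour of this rule-based extraction; the regular expressions and the (truncated) number-word table are simplified assumptions.

import re

WORD_NUMBERS = {"one": "1", "two": "2", "three": "3", "four": "4", "five": "5"}  # truncated

def extract_quantities(text):
    # normalise written-out numbers to digits
    for word, digit in WORD_NUMBERS.items():
        text = re.sub(r"\b{}\b".format(word), digit, text)
    # rewrite purely numeric LaTeX fractions such as \frac{1}{10} as decimals
    text = re.sub(r"\\frac\{(\d+)\}\{(\d+)\}",
                  lambda m: str(float(m.group(1)) / float(m.group(2))), text)
    # collect plain and decimal numbers in their order of appearance
    return [float(q) for q in re.findall(r"\d+(?:\.\d+)?", text)]

print(extract_quantities("100 miles is represented by 1 inch; what about 2.4 inches?"))
# -> [100.0, 1.0, 2.4]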
Operations and Tree Enumeration
Example 1: On a certain map, 100 miles is represented by 1 inch. What is the number of miles represented by 2.4 inches on this map?

Once quantities from the question were extracted, all their possible combinations were obtained, with size from two to the total number of quantities. The order of quantities in these combinations, however, stayed the same. Consider Example 1. For this word problem, the combination [100 2.4] would be possible, but the combination [2.4 100] would not.
For every combination obtained in the previous step, all possible expression trees with quantities as leaves and empty inner nodes were generated. These inner nodes were filled with all possible combinations of operation signs. As in earlier studies (Roy and Roth, 2017; Wang et al., 2018), we used six operations: apart from the standard +, −, × and ÷, we included the reverse operators −rev and ÷rev (subtraction and division with the operand order swapped) to account for the fact that the order of quantities stays the same in their combinations.
Like Roy and Roth (2015), we implemented constraints that define monotonic trees. These constraints are concerned with the order of the multiplication operator in regard to the division operator, and the order of the addition operator in relation to the subtraction operator. However, unlike the authors, we used these constraints to decrease the number of trees resulting in right answers, not to guarantee that any monotonic tree for the solution expression has the same LCA operation for any pair of quantities in it, as in Roy and Roth (2015).
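A compact Python sketch of this enumeration is shown below; it yields the value of every binary expression tree over an ordered quantity combination using the six operators, leaving out the scoring and the monotonicity constraints described above.

from itertools import combinations

OPS = {
    "+": lambda a, b: a + b,
    "-": lambda a, b: a - b,
    "*": lambda a, b: a * b,
    "/": lambda a, b: a / b if b != 0 else float("inf"),
    "-rev": lambda a, b: b - a,
    "/rev": lambda a, b: b / a if a != 0 else float("inf"),
}

def tree_values(quantities):
    """Yield the result of every expression tree whose ordered leaves are the given quantities."""
    if len(quantities) == 1:
        yield quantities[0]
        return
    for split in range(1, len(quantities)):
        for left in tree_values(quantities[:split]):
            for right in tree_values(quantities[split:]):
                for op in OPS.values():
                    yield op(left, right)

def candidate_answers(all_quantities):
    results = set()
    for size in range(2, len(all_quantities) + 1):
        for combo in combinations(all_quantities, size):
            results.update(tree_values(list(combo)))
    return results

answers = candidate_answers([100.0, 2.4])
print(any(abs(v - 240.0) < 1e-9 for v in answers))   # 100 * 2.4, the answer to Example 1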
Features
We used UDPipe (Straka and Straková, 2017) to parse questions' text and extract features for the classifiers. The features are identical to the ones that Roy and Roth (2015) describe.
Results
The results that our systems achieved are shown in Table 1. Our official submission consists only of the neural network, which achieved 15%, with the accuracy on closed-vocabulary algebra questions being 16% and the accuracy on the other two categories being 15% each. This result, however, was achieved by guessing "A" whenever the question could not be answered by the model. When guessing is removed, the overall accuracy drops to 2%. However, on the 109 questions the model could actually answer, it achieved 21% accuracy. In post-evaluation, after combining the results of the neural network and the tree system, we were able to achieve 17% accuracy overall by increasing the accuracy on open-vocabulary algebra questions by 20%. If we remove the guessing, the tree system achieves 3% accuracy overall, which stems from its 13% accuracy on open-vocabulary algebra questions. If we only count the questions that could actually be answered by the system, its accuracy would be equal to 26%. Without guessing, the combination of both systems produces 4% accuracy overall, with the distribution being 2% on closed-vocabulary algebra questions, 13% on open-vocabulary algebra questions and 0.4% on geometry questions. On the 205 questions answered by the combination of both systems, the accuracy was 23%.

System | Accuracy
Baseline (always "A") | 14.3%
Baseline + seq2seq | 15.0%
Baseline + trees | 15.9%
Baseline + seq2seq + trees | 16.7%
Table 1: Results
Discussion/Conclusion
The results of our systems on the full data set are, frankly put, rather poor. Nevertheless, the tree system shows promising results in solving openvocabulary questions, if it is refined and improved, while the neural network seems not to perform well on any specific type of questions, although its overall performance is similar to that of the tree system.
Concerning the neural network, it might be beneficial to focus on specific types of questions, instead of trying to train a model that deals with mixed types of questions. The RNN learnt best on closed- and open-vocabulary algebra questions, therefore training separate models for these types could be one way to improve the system. In addition to that, a much larger dataset is critical in enhancing the model, thus improving the accuracy of its predictions. Lastly, data augmentation would further improve the model. If we were to train a versatile model for mixed types of math questions, we could perform data augmentation on each type.
The current problem of the tree system lies to a large extent within the quality of the tree evaluation. It heavily relies on answer options being available, as the average index of the first tree that produces an answer option in the score list is 47 (for the test data). Therefore, the answer of the highest-rated tree would most likely be wrong. Other aspects that could be improved include choosing other features for the classifiers, decreasing the scores of trees with low numbers of quantities (those trees are currently overrated) or using a different machine learning algorithm altogether, such as deep reinforcement learning (e.g. Wang et al., 2018).
Apart from that, using no additional quantities in constructing trees, and including every quantity once made it difficult to obtain trees that not only gave the right result for the questions from the training set, but also answered them in a right way. Moreover, expanding expression trees to problems that involve letters to denote quantities would definitely contribute to improving the performance of the tree system.
Figure 1: Seq2seq Model
Acknowledgements

Part of the experiments reported in this paper was run on a Titan Xp donated by the NVIDIA Corporation.
References

François Chollet et al. 2015. Keras. https://keras.io.

Junyoung Chung, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2014. Empirical evaluation of gated recurrent neural networks on sequence modeling. In NIPS 2014 Workshop on Deep Learning, December 2014.

Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9:1735-1780.

Mark Hopkins, Ronan Le Bras, Cristian Petrescu-Prahova, Gabriel Stanovsky, Hannaneh Hajishirzi, and Rik Koncel-Kedziorski. 2019. SemEval-2019 task 10: Math question answering. In Proceedings of the International Workshop on Semantic Evaluation (SemEval-2019), Minneapolis, USA.

Subhro Roy and Dan Roth. 2015. Solving general arithmetic word problems. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1743-1752. Association for Computational Linguistics.

Subhro Roy and Dan Roth. 2017. Unit dependency graph and its application to arithmetic word problem solving. In AAAI.

Nitish Srivastava, Geoffrey Hinton, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov. 2014. Dropout: A simple way to prevent neural networks from overfitting. Journal of Machine Learning Research, 15:1929-1958.

Milan Straka and Jana Straková. 2017. Tokenizing, POS tagging, lemmatizing and parsing UD 2.0 with UDPipe. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 88-99, Vancouver, Canada. Association for Computational Linguistics.

Lei Wang, Dongxiang Zhang, Lianli Gao, Jingkuan Song, Long Guo, and Heng Tao Shen. 2018. MathDQN: Solving arithmetic word problems via deep reinforcement learning. In AAAI.

Yan Wang, Xiaojiang Liu, and Shuming Shi. 2017. Deep neural solver for math word problems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 845-854.

Wojciech Zaremba and Ilya Sutskever. 2015. Learning to execute. Computing Research Repository, arXiv:1410.4615. Version 3. |
52,128,323 | Verbal Multiword Expressions in Basque Corpora | This paper presents a Basque corpus where Verbal Multiword Expressions (VMWEs) were annotated following universal guidelines. Information on the annotation is given, and some ideas for discussion upon the guidelines are also proposed. The corpus is useful not only for NLPrelated research, but also to draw conclusions on Basque phraseology in comparison with other languages. | [
23006146,
1487867,
216923442,
9966389
] | Verbal Multiword Expressions in Basque Corpora
August 25-26, 2018
Uxoa Iñurrieta usoa.inurrieta@ehu.eus
Itziar Aduriz itziar.aduriz@ub.edu
Ainara Estarrona ainara.estarrona@ehu.eus
Itziar Gonzalez-Dios itziar.gonzalezd@ehu.eus
Antton Gurrutxaga a.gurrutxaga@elhuyar.eus
Elhuyar Foundation
IXA NLP group
University of Barcelona
Ruben Urizar ruben.urizar@ehu.eus
Iñaki Alegria i.alegria@ehu.eus
IXA NLP group
University of the Basque Country
Verbal Multiword Expressions in Basque Corpora
Proceedings of the Joint Workshop on Linguistic Annotation, Multiword Expressions and Constructions (LAW-MWE-CxG-2018)
Santa Fe, New Mexico, USA, August 25-26, 2018, page 86
This paper presents a Basque corpus where Verbal Multiword Expressions (VMWEs) were annotated following universal guidelines. Information on the annotation is given, and some ideas for discussion upon the guidelines are also proposed. The corpus is useful not only for NLPrelated research, but also to draw conclusions on Basque phraseology in comparison with other languages.
Introduction
For Natural Language Processing (NLP) tools to produce good-quality results, it is necessary to detect which words need to be treated together (Sag et al., 2002;Savary et al., 2015). However, identifying Multiword Expressions (MWEs) is a challenging task for NLP, and current tools still struggle to do this properly. This is mainly due to the multiple morphosyntactic variants that these kinds of word combinations can have, especially when their syntactic head is a verb.
(1) They made a decision.
(2) They made some difficult decisions.
(3) The decisions they made were correct.
In order to promote research on this topic, the PARSEME Shared Task on Automatic Identification of Verbal Multiword Expressions (VMWEs) was organised (Savary et al., 2017), which holds its second edition this year. One of the outcomes of this initiative is an MWE-annotated corpus including 20 languages. Along with other relevant resources (Losnegaard et al., 2016), this kind of corpus can be helpful to tackle the problems posed by MWEs to NLP. The present paper aims at describing the Basque annotation carried out for this Shared Task (ST), Basque being one of the novel languages included in the new edition.
Comprehensive work has been done on Basque MWEs, not only from a linguistic perspective (Zabala, 2004), but also concerning identification within parsing (Alegria et al., 2004), extraction of VMWEs for lexicographical purposes (Gurrutxaga and Alegria, 2011) and translation (Inurrieta et al., 2017). Nevertheless, this is the first corpus where these kinds of expressions are manually annotated 1 .
The paper starts by introducing what resources are used (Section 2), and it goes on to briefly describe how the annotation process was done overall (Section 3). Then, the main confusing issues concerning Basque VMWEs are commented on (Section 4), and a few questions about the guidelines are proposed for future discussion (Section 5). Some remarks about Basque VMWEs are also made based on the annotated corpus (Section 6), and finally, conclusions are drawn (Section 7).

This work is licenced under a Creative Commons Attribution 4.0 International Licence. Licence details: http://creativecommons.org/licenses/by/4.0/

1 Annotation of Verb+Noun MWEs in Basque was carried out by Gurrutxaga and Alegria (2011), but note that this was not done on corpora but on automatically extracted out-of-context word combinations.
Resources and setup
For the annotation described in this paper, a Basque corpus was created by collecting texts from two different sources: (A) 6,621 sentences from the Universal Dependencies treebank for Basque (Aranzabe et al., 2015), that is, the whole UD treebank, and (B) 4,537 sentences taken from the Elhuyar Web Corpora 2 . Thus, in all, the Basque corpus consists of 11,158 sentences (157,807 words).
The UD subcorpus comprises news from Basque media, whereas the Elhuyar subcorpus consists of texts which were automatically extracted from the web. Although only good-quality sources were selected and a cleanup was done before performing the annotation, a few strange sentences can still be found in this part due to automatic extraction (such as sentences missing some words or a few words in languages other than Basque). Scripts made available by the ST organisers 3 were used to prepare the corpus before and after annotation.
Likewise, the annotation guidelines 4 created specifically for the ST edition 1.1 were used. The guidelines are intended to be universal and were the result of thoughtful discussions among experts from many different languages (Savary et al., 2018). Six different categories of VMWEs are included in the guidelines, but only two of them are applicable to Basque: Verbal Idioms (VID) and Light Verb Constructions (LVCs), the latter being divided into two subcategories, LVC.full and LVC.cause. All of them are universal categories.
Detailed information about each of the categories can be found in the guidelines, as well as decision trees and specific tests provided in order to make it easier to decide whether/how a given combination should be annotated. As a brief explanation to better follow the content of this paper, categories can be broadly defined as follows.
• VID: combinations of a verb and at least another lexicalised component whose meaning is not derivable from the separate meanings of the component words.
(4) adarra jo 5 horn-the.ABS play '(to) trick, (to) pull somebody's leg'
• LVC.full: combinations of a verb and a noun phrase (sometimes introduced or followed by an adposition) where the noun denotes an event or state and the verb adds only morphological features but no meaning.
(5) proba egin test.BARE do '(to) try'
• LVC.cause: combinations of a verb and a noun phrase (sometimes introduced or followed by an adposition) where the noun denotes an event or state and the verb is causative.
(6) berri izan news.BARE have '(to) know (about), (to) have heard (of)'
As for the annotation platform, FLAT 6 was used, which has a very user-friendly interface and greatly simplifies the task of adding, deleting or modifying tags.
The annotation process
The annotation process had several phases. First of all, a few training sessions were organised with a dual objective: on the one hand, to help participants get familiarised with the guidelines and the annotation platform; on the other hand, to identify tricky issues that might arise from annotating Basque VMWEs in corpora. Some decisions were made on problematic cases, which were then collected in an internal document to be used as a reference tool along with the guidelines.
Six experts took part in this annotation task: five linguists and a lexicographer, most of whom have broad experience in the field of phraseology. The training sessions will now be briefly described (Section 3.1), and some more details on the final annotated corpus will be given (Section 3.2).
Training sessions
After receiving explanations about the guidelines and the annotation platform, all participants were asked to annotate the same part of the corpus: 500 sentences in all. At this first attempt, the degree of disagreement was considerably high among annotators, whose number of tags varied from 85 to 170 for the same sentences. The main reason for this was that two opposed positions were adopted: whereas some participants marked everything which showed any kind of similarity with VMWEs, others opted for annotating only the cases they were completely sure of.
All examples which caused disagreements were collected and classified, and three more sessions were organised, where participants tried to reach an agreement on the main problematic cases. A lot of the differently-annotated sentences were quite easy to decide on, as they were due to misunderstandings on basic concepts, either related to general language or to the guidelines. The rest of the cases, however, required further discussion. Decisions made on these cases were collected in an internal document for Basque annotators, so that they knew what criteria they should follow. Details about this document will be given in Section 4.
Final annotation and Inter-Annotator Agreement
After disagreements were discussed and decided on, each annotator was assigned some texts, and a small part of the corpus was double-annotated as a basis to calculate Inter-Annotator Agreement (IAA). This subcorpus was fully annotated by one participant, and was then split into two parts, so that two more annotators would work on one part each. Following the measurements of the first edition of the ST, the final IAA scores for Basque are summed up in Table 1. As can be noticed, scores are noteworthily high for all three measures. This is presumably an outcome of, on the one hand, the clarity of the guidelines and the specific tests provided, and on the other hand, the effectiveness of the training sessions held before starting the real annotation. Additionally, as a further step towards ensuring the unity of all annotations, consistency checks were performed once the main annotations were finished. Considering that before such checks these IAA scores were already much higher than average (compared to the rest of the languages included in the ST), the good quality of this resource becomes evident beyond doubt.
The final annotated corpus comprises 3,823 VMWE tags of three categories in a total of 11,158 sentences. General data about the annotations is collected in Table 2, and further comments on them will be made in Section 6.

As pointed out previously, all the conclusions drawn from the training sessions were collected in an internal document for annotators. The main issues found during the annotation of Basque VMWEs will now be commented on, and the decisions made for each of the issues will be explained. Note that only general questions will be brought up here. Individual cases which led to disagreements among annotators will not be included in this section, although a few examples of this kind were also collected.
Morphological variation of the nouns inside LVCs
In Basque, noun phrases almost always have a determiner, and there are hardly any instances of "bare" nouns (Laka, 1996), that is, nouns with no determiner at all. However, the presence of this kind of noun followed by a (usually light) verb seems to be a common characteristic among VMWEs. More specifically, it is frequent in VMWEs which denote very common actions, usually expressed by single verbs in other languages.
(7) lo egin sleep.BARE do '(to) sleep', (ES) 'dormir', (FR) 'dormir' (8) hitz egin word.BARE do '(to) speak', (ES) 'hablar', (FR) 'parler'
While some of these VMWEs accept almost no morphological modification in the noun phrase, others are also used with determiners and modifiers, as the one shown in Examples (9)-(10). In these cases, the VMWEs display a canonical morphosyntactic variation.
(9) lan egin work.BARE do '(to) work'
(10) lana egin work-the.ABS do '(to) work, (to) do some work'
Morphological variants of this kind of LVC caused some trouble to annotators at the beginning, probably because only variants where the noun is "bare" are currently considered MWEs by Basque parsers (Alegria et al., 2004). Although it has sometimes been argued that instances with a determiner should not be treated as VMWEs, they pass all the LVC tests in the guidelines. Thus, our decision was to annotate these kinds of combinations both when they have some determiner and when they do not.
The future time in LVCs containing the verb izan
Izan 'have/be' is one of the most common verbs inside Basque LVCs, but it is also an auxiliary verb, which can be confusing for annotators sometimes. The usage of this verb is somewhat peculiar concerning the future form of LVCs. When we want to express that a given action will happen in the future, the verb participle is inflected by taking the morpheme -ko/-go at the end. However, this morpheme does not always follow the verb when an LVC with izan is used: in many cases, it can also be attached to the noun inside the VMWE, eliding the verb. In Example (12), the -go morpheme is attached to the verb as usual, while in Example (13) the verb is elided, and the morpheme -ko is added to the noun behar instead 8. Whereas the first two cases must be annotated, there is no VMWE in the third one, as only one lexicalised component is present, behar.
The fact that izan is also an auxiliary verb makes it easy to mistakenly think that the auxiliary after a word like beharko is a lexicalised component of the VMWE. However, this difference is an important detail annotators should always bear in mind. To see this difference, it can be helpful to use a morphological analyzer like Morfeus (Alegria et al., 1996), as it analyses beharko as an inflected form of behar izan.
The blurred limit between adjectives and nouns in Basque VMWEs
All languages have words which can belong to more than one part of speech. In some Basque VMWEs, it is not always clear whether the non-verbal element is a noun or an adjective, and many parsers struggle to assign the right tag. For instance, the word gose 'hunger/hungry' can be either one or the other depending on the context, even though its usage as an adjective is quite marginal nowadays. In Examples (14)-(15), two VMWEs containing this word and the verb izan 'be/have' are shown. Although intuition tells us that gose is an adjective in Example (14) but a noun in (15), it is very common for parsers to tag both instances as nouns.
(14) gose naiz hungry/hunger.BARE be.1PS.PR 'I am hungry.'
(15) gosea dut 9 hunger-the.ABS have.1PS.PR 'I am hungry.'
Besides, sometimes, the usage of a word which always holds one category may even suggest that it belongs to a different part of speech within a VMWE. For instance, the first element in the expression nahi izan (wish.BARE have → '(to) want') can take the comparative suffix -ago, which is used to grade adjectives and adverbs: nahiago izan (wish-more have → '(to) prefer'). This usage may suggest that nahi is used as an adjective in this expression, even if it is always used as a noun out of it.
For coherence, it was concluded that these kinds of examples should all be grouped equally, and they were classified in the LVC categories. Given that the non-verbal element is sometimes closer to adjectives than to nouns, it could be pertinent to add a note in the guidelines along with the one about Hindi, which states "the noun can be replaced by an adjective which is morphologically identical to an eventive noun". Exactly the same could be applied to Basque as well.
(16) bizi izan live/life be '(to) live'
In fact, as the adjectives of this kind have identical nouns, combinations like the one in Example (16) pass LVC tests with no difficulty, and thus, this is the category they were assigned, regardless of their adjectival nature.
(Apparently) cranberry words inside LVCs
Some VMWEs which have reached us from a former stage of the language may present some idiosyncrasies from a diachronic perspective, e.g. the lack of determiners in noun phrases (see Section 4.1). They may also contain words which are only used within the context of a given verbal expression. For example, the word merezi is almost exclusively used as part of the VMWE merezi izan 'to deserve'.
Something similar occurs with ari in the verbal expression ari izan, which is categorised as a complex aspectual verb in Basque grammars (Etxepare, 2003). It is used in phrases such as lanean ari izan 'to be at work' and becomes grammaticalised when used to make the continuous forms of verbs, as in jaten ari izan 'to be eating'.
For the vast majority of Basque speakers, it is not a straightforward assumption that these words are nouns. Nevertheless, if we take a look at the Orotariko Euskal Hiztegia (Mitxelena, 1987), the reference historical dictionary created by the Royal Academy of the Basque language, Euskaltzaindia 10 , we realise that these words have an entry by themselves and are actually classified as nouns. Furthermore, while speakers might first think that these expressions do not pass test LVC.5, that is, that the verb can be omitted when a possessive is added to the noun, some examples 11 of this kind can be found in the dictionary:
(17) Eman diote (...) bere merezia.
give AUX.3PP (...) his/her deserved-the.ABS
'They gave him what he deserved.'

(18) Ez zuen utzi bere aria.
not AUX.3PS leave his/her practice-the.ABS
'He did not stop doing what he was doing.'

To sum up, although some non-verbal elements in VMWEs might look like cranberry words, it is important to contrast information with reference material, especially when the verb is accompanied by a light verb. For the examples mentioned here, it was clear to us that LVC.full was the category where they fitted best.
Discussion on some conceptions in the guidelines
Overall, it is remarkable that the most controversial issues during the training sessions were all related to LVCs. This is probably an effect of the very high frequency of this type of VMWE in Basque corpora (more details will be given in Section 6), but it should also be considered that, as far as LVCs are concerned, there are notable differences between the guidelines and the rest of the literature on Basque (and Spanish) phraseology. Therefore, it is very likely that this fact has also conditioned the doubts that arose for participants.
It is an enormous challenge to create universal guidelines in a field like phraseology, where boundaries are never as definite as NLP tools would need. The guidelines created for both PARSEME Shared Tasks are a really important step towards unifying different conceptions about MWEs, and the clarity of tests simplifies the annotation task greatly. However, some points might still benefit from further consideration, which will be briefly noted here. If these points were problematic in other languages as well, the ideas presented in this section could be used as a starting point for future discussion.
Two main notions will be mentioned here related to the gap existent between the guidelines and our previous conceptions about phraseology: on the one hand, the understanding of collocations as a phenomenon separate from MWEs (Section 5.1), and on the other hand, the fact that LVCs are defined as combinations of a verb and a noun phrase only (Section 5.2).
Collocations as non-VMWEs
LVCs are usually understood as a subcategory of collocations in the reference literature about Basque phraseology (Urizar, 2012; Gurrutxaga and Alegria, 2013), as well as in that about Spanish phraseology (Corpas Pastor, 1997). However, in the guidelines, collocations are defined as a mere statistical phenomenon, and they are discriminated not only from LVCs but also from VMWEs in general. The line separating the two was not always clear, and despite the comprehensive tests, annotators sometimes found it hard not to annotate some instances which, according to them, were clearly related to phraseology somehow.
(19) deia egin call-the.ABS make '(to) make a call'
(20) deia jaso call-the.ABS receive '(to) receive a call'
For instance, the guidelines say that, whereas the combination in Example (19) must be annotated, the one in Example (20) must not. The fact that one passes all tests and the other one does not made it relatively easy to let the second example apart. However, it is still not that evident to us that it should not be treated as a VMWE at all, since the noun deia 'call' always chooses the verb jaso 'receive' to express that meaning. As a matter of fact, it is extremely rare to see it accompanied by other verbs which could equally express that meaning, such as eduki 'have'. Similar examples were found quite often in the corpus, so it might be worth examining those cases further for future editions.
LVCs accepting only noun phrases
On the other hand, according to the guidelines, LVCs can only be composed of a light verb and a noun phrase (except for Hindi, as is pointed out in Section 4.3). These noun phrases can be preceded by prepositions or followed by postpositions. According to this, VMWEs like the one in Example (21) should not be annotated as LVC.full, as korrika is an adverb.
(21) korrika egin running.ADV do '(to) run'
By definition, LVCs are VMWEs where the verb is void of meaning and the other component carries the whole semantic weight about the event or state the combination denotes. In Basque, many events can be expressed by adverbs, and this definition could equally be applied to constructions of adverbs and light verbs like the one in Example (21).
Furthermore, many of these adverbs are created by attaching a suffix to a noun, often -ka, such as hazka 'scratching', which comes from hatz 'finger' and forms part of the VMWE hazka egin (scratching do → '(to) scratch'). Thus, the LVC.full and LVC.cause categories would probably be more coherent if they had a wider scope and this kind of combination was also considered.
6 Information about Basque VMWEs inferred from annotations

As already mentioned, VMWEs from three different categories were annotated in Basque: VID, LVC.full and LVC.cause. Table 2 shows how many tags there are in the corpus, where the number of VMWEs annotated as LVC.full clearly stands out from the rest: 75% of all tags belong to this category. If we add the instances in the LVC.cause group to this number, the whole group of LVCs amounts to almost 80% of all annotations. This is not surprising, since, as is pointed out in Section 4.1, it is not strange that very common actions expressed by single verbs in some other languages are denoted by an LVC in Basque. Thus, it was to be expected that the number of instances in this category would be higher in our corpus than in other languages. On the other hand, the number of instances annotated as LVC.cause is very low (less than 5% of all tags), and this seems to be quite a common tendency also in other languages. Considering only annotations from the three universal categories, the average percentage of VMWEs classified in this group is only 3% (taking all 20 languages into account). This might be a sign that either: (A) the LVC.cause category would be better merged with the LVC.full one, or (B) it would be a good idea to broaden this category so that it includes combinations that are not yet annotated, such as collocations.
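For readers who want to reproduce such counts from the released corpus, the sketch below shows one way to tally VMWE categories. It assumes the standard PARSEME .cupt format, in which sentences are separated by blank lines and the last tab-separated column carries the MWE annotation (the category label appears only on the first token of each expression, e.g. '1:LVC.full'); the file name is purely illustrative.

```python
from collections import Counter

def count_vmwe_categories(cupt_path):
    """Tally sentences and VMWE category tags in a PARSEME .cupt file."""
    sentences, categories = 0, Counter()
    with open(cupt_path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line.strip():          # blank line closes a sentence
                sentences += 1
                continue
            if line.startswith("#"):      # sentence-level metadata
                continue
            mwe_col = line.split("\t")[-1]
            if mwe_col in ("*", "_"):     # token is not part of a VMWE
                continue
            for code in mwe_col.split(";"):
                if ":" in code:           # category only on the first token
                    categories[code.split(":")[1]] += 1
    return sentences, categories

# Illustrative usage: per-category counts and per-100-sentence rates
n_sent, cats = count_vmwe_categories("basque_train.cupt")  # hypothetical path
total = sum(cats.values())
for cat, n in sorted(cats.items()):
    print(f"{cat}: {n} ({100 * n / total:.0f}% of tags, "
          f"{100 * n / n_sent:.0f} per 100 sentences)")
```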
Concerning morphology, the VMWEs in the Basque corpus are mostly combinations of a verb and a noun (94%) 13 , which was easy to anticipate considering that LVCs can only be of this kind according to the guidelines. Consistent with other work about VMWEs in dictionaries (Inurrieta et al., 2017), such nouns are mainly found in the absolutive case (85%) in the corpus, and among the rest, the locative is the most frequent postposition, as in Example (22).
(22) jolasean ibili game-the.LOC be '(to) be playing, (to) play'
Something comparable probably happens in other languages as well. In the Spanish corpus, for example, out of the VMWEs where the main constituents are a verb and a noun, only 23% include a preposition.
Conclusion
VMWEs were annotated in a 11,158-sentence Basque corpus, following the universal guidelines of edition 1.1 of the PARSEME Shared Task on Automatic Identification of Verbal Multiword Expressions. In all, 3,823 instances were annotated and classified into two main categories: Verbal Idioms and Light Verb Constructions. High Inter-Annotator Agreement scores make it evident that this is a very good-quality resource, which can be useful not only for NLP-related research, but also for future studies on Basque phraseology.
After explaining how the annotation process was organised, the main doubts that arose for Basque annotators while performing this task were commented on in this paper. The decisions taken on language-dependent issues were presented, and some ideas for discussion on the universal guidelines were also proposed. If these ideas are shared by annotators from other languages, it could be interesting to take a further look at them for future editions.
sent  inst-file1  inst-file2  mwe-fscore  kappa  kappa-cat
871   327         355         0.86        0.82   0.86

Table 1: IAA scores
sentences  tokens   MWEs   LVC.cause  LVC.full  VID
11,158     157,807  3,823  183        2,866     774

Table 2: Data about the final Basque VMWE corpus

4 Difficult language-dependent cases
Table 3 makes this fact obvious. It collects the ratio of LVCs and VMWEs per sentence in the Basque corpus, as well as the average ratio of the whole ST corpus (20 languages in all) and the ratios for the Spanish, French and English corpora 12 , the three languages which affect Basque the most. In order to make comparisons properly, only the three universal categories were taken into account, even if all except Basque include other categories as well. From the languages included in the ST, only Farsi and Hindi have a higher number of LVCs per 100 sentences (95 and 40 respectively).

         VMWEs per 100 sentences  LVCs per 100 sentences
Basque   34                       27
Average  18                       11
French   20                       9
Spanish  15                       9
English  6                        4

Table 3: Average frequencies of tags in Basque, Spanish, French and English
2 http://webcorpusak.elhuyar.eus/
3 https://gitlab.com/parseme/utilities/tree/master/1.1
4 http://parsemefr.lif.univ-mrs.fr/parseme-st-guidelines/1.1/?page=home
5 Explanations for glosses in examples: ABS → absolutive case; ADV → adverb; AUX → auxiliary verb; BARE → bare noun; FUT → future; LOC → locative postposition; 1PS/3PS → 1st/3rd person singular; 3PP → 3rd person plural.
6 http://flat.readthedocs.io/en/latest/
7 Meaning of the table columns: sent = sentence; inst-file1 = instances annotated by one of the annotators; inst-file2 = instances annotated by the other two annotators; mwe-fscore = F score for MWEs; kappa = kappa score for VMWEs annotated; kappa-cat = kappa score for VMWE categories. More details on how scores were calculated are given in (Savary et al., 2018).
8 Note that -ko and -go are allomorphs of the same morpheme (due to phonemic context).
9 Example (15) is probably a loan translation, as this is the way the idea of being hungry is expressed in Spanish and French, the main languages sharing territory with Basque. This usage is more recent and, according to some speakers, it is not as 'proper' as the first one. However, it is more and more common in real corpora and, thus, it must be considered.
10 www.euskaltzaindia.eus
11 For clarity, examples were re-written following current orthographical rules.
12 Corpora for all languages can be accessed here: https://gitlab.com/parseme/sharedtask-data/tree/master/1.1
13 When calculating this number, non-verbal elements of LVCs which could be either a noun or an adjective (see Section 4.3) were counted as nouns.
Maria Jesus Aranzabe, Aitziber Atutxa, Kepa Bengoetxea, Arantza Díaz de Ilarraza, Koldo Gojenola, and Larraitz Uria. 2015. Automatic conversion of the Basque dependency treebank to universal dependencies. In Proceedings of the Workshop on Treebanks and Linguistic Theories (TLT 2015), 233-241.
Iñaki Alegria, Xabier Artola, Kepa Sarasola, and Miriam Urkia. 1996. Automatic morphological analysis of Basque. Literary and Linguistic Computing, 11(4):193-203.
Iñaki Alegria, Olatz Ansa, Xabier Artola, Nerea Ezeiza, Koldo Gojenola, and Ruben Urizar. 2004. Representation and treatment of Multiword Expressions in Basque. In Proceedings of the Workshop on Multiword Expressions: Integrating Processing, 48-55. Association for Computational Linguistics.
Gloria Corpas Pastor. 1997. Manual de fraseología española. Editorial Gredos.
Ricardo Etxepare. 2003. Valency and argument structure in the Basque verb. In Jose Ignacio Hualde and Jon Ortiz de Urbina (eds.), A grammar of Basque. Mouton de Gruyter.
Antton Gurrutxaga and Iñaki Alegria. 2011. Automatic extraction of NV expressions in Basque: basic issues on cooccurrence techniques. In Proceedings of the Workshop on Multiword Expressions: from parsing and generation to the real world, 2-7. Association for Computational Linguistics.
Antton Gurrutxaga and Iñaki Alegria. 2013. Combining different features of idiomaticity for the automatic classification of noun+verb expressions in Basque. In Proceedings of the 9th Workshop on Multiword Expressions, 116-125. University of the Basque Country.
Uxoa Inurrieta, Itziar Aduriz, Arantza Díaz de Ilarraza, Gorka Labaka, and Kepa Sarasola. 2017. Rule-based translation of Spanish Verb-Noun combinations into Basque. In Proceedings of the 13th Workshop on Multiword Expressions, EACL 2017, 149-154. Association for Computational Linguistics.
Uxoa Inurrieta, Itziar Aduriz, Arantza Díaz de Ilarraza, Gorka Labaka, and Kepa Sarasola. 2018 (in print). Analysing linguistic information about word combinations for a Spanish-Basque rule-based machine translation system. In Ruslan Mitkov, Johanna Monti, Gloria Corpas Pastor and Violeta Seretan (eds.), Multiword Units in Machine Translation and Translation Technologies, 39-60. John Benjamins publishing company.
Koldo Mitxelena. 1987. Orotariko Euskal Hiztegia. Euskaltzaindia, the Royal Academy of the Basque language.
Itziar Laka Mugarza. 1996. A brief grammar of Euskera, the Basque language. University of the Basque Country.
Gyri Smørdal Losnegaard, Federico Sangati, Carla Parra Escartín, Agata Savary, Sascha Bargmann, and Johanna Monti. 2016. PARSEME survey on MWE resources. In 9th International Conference on Language Resources and Evaluation (LREC 2016), 2299-2306. European Association for Language Resources.
Ivan A. Sag, Timothy Baldwin, Francis Bond, Ann Copestake, and Dan Flickinger. 2002. Multiword expressions: a pain in the neck for NLP. In International Conference on Intelligent Text Processing and Computational Linguistics, 1-15. Springer.
Agata Savary, Manfred Sailer, Yannick Parmentier, Michael Rosner, Victoria Rosén, Adam Przepiórkowski, Cvetana Krstev, Veronika Vincze, Beata Wójtowicz, Gyri Smørdal Losnegaard, and others. 2015. PARSEME - PARSing and Multiword Expressions within a European multilingual network. In 7th Language & Technology Conference: Human Language Technologies as a Challenge for Computer Science and Linguistics (LTC 2015).
Agata Savary, Carlos Ramisch, Silvio Cordeiro, Federico Sangati, Veronika Vincze, Behrang QasemiZadeh, Marie Candito, Fabienne Cap, Voula Giouli, Ivelina Stoyanova, and others. 2017. The PARSEME Shared Task on automatic identification of Verbal Multiword Expressions. In Proceedings of the 13th Workshop on Multiword Expressions, EACL 2017, 31-47. Association for Computational Linguistics.
Agata Savary, Carlos Ramisch, Silvio Cordeiro, Veronika Vincze, and others. 2018. Edition 1.1 of the PARSEME Shared Task on automatic identification of Verbal Multiword Expressions. In Proceedings of the 14th Workshop on Multiword Expressions, COLING 2018. Association for Computational Linguistics.
Ruben Urizar. 2012. Euskal lokuzioen tratamendu konputazionala. University of the Basque Country.
Igone Zabala Unzalu. 2004. Los predicados complejos en vasco. In Las fronteras de la composición en lenguas románicas y en vasco, 445-534. Universidad de Deusto. |
243,864,612 | SemLink 2: Chasing Lexical Resources | The SemLink resource provides mappings between a variety of lexical semantic ontologies, each with their strengths and weaknesses. To take advantage of these differences, the ability to move between resources is essential. This work describes advances made to improve the usability of the SemLink resource: the automatic addition of new instances and mappings, manual corrections, sense-based vectors and collocation information, and architecture built to automatically update the resource when versions of the underlying resources change. These updates improve coverage, provide new tools to leverage the capabilities of these resources, and facilitate seamless updates, ensuring the consistency and applicability of these mappings in the future. 1 | [
8716224,
3626819,
53048550,
201681771,
1957433,
21699285,
11616343,
53101497,
5959482,
216867421
] | SemLink 2: Chasing Lexical Resources
June 17-18, 2021
Kevin Stowe
Department of Computer Science
Ubiquitous Knowledge Processing Lab (UKP Lab)
Technical University of Darmstadt
Jenette Preciado
SoundHound
Boulder, Colorado
Kathryn Conger
University of Colorado
Boulder
Susan Brown
University of Colorado
Boulder
Ghazaleh Kazeminejad
University of Colorado
Boulder
James Gung
University of Colorado
Boulder
Martha Palmer
University of Colorado
Boulder
SemLink 2: Chasing Lexical Resources
Proceedings of the 14th International Conference on Computational Semantics
the 14th International Conference on Computational Semantics, June 17-18, 2021
The SemLink resource provides mappings between a variety of lexical semantic ontologies, each with their strengths and weaknesses. To take advantage of these differences, the ability to move between resources is essential. This work describes advances made to improve the usability of the SemLink resource: the automatic addition of new instances and mappings, manual corrections, sense-based vectors and collocation information, and architecture built to automatically update the resource when versions of the underlying resources change. These updates improve coverage, provide new tools to leverage the capabilities of these resources, and facilitate seamless updates, ensuring the consistency and applicability of these mappings in the future. 1
Introduction
Hand-crafted lexical resources remain an important factor in natural language processing research, as they can offer linguistic insights that are currently not captured even by modern deep learning techniques. SemLink is a connecting point between a number of different lexical semantic resources, providing mappings between different word senses and semantic roles, as well as a corpus of annotation (Palmer, 2009). SemLink has a variety of applications, from linguistic analysis of its component parts and their relations (Reisinger et al., 2015), extraction of thematic role hierarchies (Kuznetsov and Gurevych, 2018), and probing of linguistic formalisms (Kuznetsov and Gurevych, 2020), to computational methods for automatic extraction, improvement, and classification of computational lexical resources (Kawahara et al., 2014; Peterson et al., 2016, 2020).

1 https://github.com/cu-clear/semlink

SemLink incorporates four different lexical resources: PropBank (Palmer and Kingsbury, 2005), VerbNet (Kipper-Schuler, 2005), FrameNet (Baker and Lowe, 1998), and WordNet via the OntoNotes sense groupings (Weischedel et al., 2011). 2 Each resource has different goals and benefits: WordNet has the greatest coverage, with very fine-grained word senses grouped into small "synonym sets". These are linked to each other with semantic relations like hyponymy and troponymy. PropBank defines the argument roles for its verb and eventive noun senses, information not available in WN. FrameNet groups verbs, eventive nouns and some adjectives into semantic frames, with fine-grained argument roles defined for each frame. These frames are linked by various relations, such as "inherited by" and "used by". VerbNet groups verbs into more or less semantically coherent classes based on shared syntactic alternations. This resource uses fairly coarse-grained argument roles and provides a list of typical syntactic patterns that the verbs of a class prefer. In addition, VN provides a semantic representation for each syntactic frame, using the class's argument roles in a first-order-logic representation that incorporates Generative Lexicon subevent structure.
Semlink provides a bridge between these resources, allowing users to take advantage of their different features and strengths. For example, the mappings between the semantic role labels allow users to accurately convert annotations done with PB roles to VN roles and combine their respective data sets into a much larger corpus of training and test data.
The goal of SemLink is to link senses between resources, maximizing the effectiveness of each. It is composed of two primary assets: mappings between resources, and a corpus of annotated instances. These are verbs in context that receive a PB roleset annotation, a VN class tag, an FN frame tag, and a sense tag based on the ON groupings.
The problem we address here is the constantly changing nature of these resources. They are evolving: new versions incorporate new semantics, new senses, better lexical coverage, and more consistent formatting. This makes it difficult to provide static links between them. SemLink has seen previous updates (Bonial et al., 2013) that improve consistency, but since that time many of the resources it links have undergone significant overhauls. Our work updates SemLink via four distinct contributions:
1. Automatic and manual updates to SemLink mappings based on new resource versions
2. Automatic addition of SemLink annotation instances, nearly doubling its size
3. Addition of sense embeddings and subject/object information
4. Release of software for automatic updates
Resources
A brief description of each resource in SemLink follows, along with the changes in each that have been implemented since the previous update.
PropBank
The previous version of SemLink incorporated PB annotation in the form of roleset mappings to VN classes and FN frames. It also contains gold annotation over sections of the Wall Street Journal corpus, with verbs annotated with their PB roleset. Each verb's arguments are annotated with their correct PB argument relations. These PB rolesets, mappings, and annotations remain core elements of SemLink, and we have expanded and updated each component for SemLink 2.0.
VerbNet
SemLink incorporates VN as an intermediary between the coarse-grained PB and fine-grained FN. Mapping files are provided that link PB rolesets to VN senses, which are then in turn linked to FN frames. The previous version of SemLink was built upon VN 3.2: this resource has since been updated to a new version (3.3), with substantial changes in class membership, thematic roles (Bonial et al., 2011), and semantics (Brown et al., 2018, 2019). We have incorporated these changes into SemLink 2.0 automatically where possible and manually where necessary.
FrameNet
The previous version of SemLink was built upon FN version 1.5; since then FN has released a new version (1.7), and this led to many consistency errors across resources. SemLink 2.0 provides manual updates to match the newest version of FN, as well as other consistency improvements.
OntoNotes Sense Groupings
The SemLink resource focuses less on these groupings than on PB, VN, and FN: it only includes ON as annotations on the provided instances. The ON resource has remained consistent since the release of the previous SemLink version, and thus the instance annotations remain valid.
Improvements and Additions
SemLink incorporates these resources via mapping files (for PB, VN, and FN) and predicate instance annotations (including all four resources). We will now overview each of these artifacts, highlighting the updates in our new release and the tools and practices used to generate these updates.
PB to VN mappings
The previous version of SemLink contains two files comprising the mappings from PB to VN: a mapping file that links PB rolesets to VN senses, and a mapping file linking PB arguments (ARG0, ARG1, etc) to VN thematic roles (Agent, Patient, etc). These files contain a growing number of inaccuracies as the resources have been updated, particularly with PB's update to unified frame files and VN's update to the version 3.3.
To deal with these constant updates, we've improved the system that automatically generates these mapping files based on ground-truth mappings present in PB. The PB frame files contain links from each roleset to possible VN classes: this allowed us to generate a large number of accurate mappings based purely on the information present in PB. The main update to this architecture is the development of VN class matching. We can now find if verbs have moved between classes, allowing the automated updater to find more valid instances. This system incorporates soft class matching for when verbs moved between VN subclasses, as well as exploiting available WordNet mappings in VN to identify if a verb moved to a new class.
The mappings generated by this system are not exhaustive: the ever-changing nature of the two projects makes it impossible to have all possible mappings. One of the primary goals of SemLink is to ensure that the most consistent possible mappings between resources is available, and our update helps to foster this consistency by making available our software for updating and evaluating the accuracy of these mappings. This is done by automatically generating mappings from PB to VN based on PB frame files, combining them with the previous version of manual mappings, and checking both of these mappings for consistency.
This process produces an update mapping resource from PB to VN. While these mappings don't eliminate the need for some manual annotation, as substantive changes can require new mappings to be added or deleted, it does allow the resource to be consistently and automatically updated while preserving only valid mappings.
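As a rough illustration of this update step, the sketch below rebuilds PB-to-VN links against a current VerbNet version. The frame-file parsing and the VerbNet lookup are reduced to assumed inputs (pb_rolesets, a verbnet object exposing class_ids and classes_for_lemma, and the previous manual mappings); the actual SemLink scripts are more elaborate, but the soft-matching idea is the same.

```python
def build_pb_vn_mappings(pb_rolesets, verbnet, previous_mappings):
    """Rebuild PB->VN mappings against the current VerbNet version.

    pb_rolesets: {roleset_id: [vn_class_id, ...]} taken from PB frame files
        (the ground-truth links mentioned above).
    verbnet: assumed interface with .class_ids (all current class ids) and
        .classes_for_lemma(lemma), used to soft-match verbs that have moved.
    previous_mappings: {roleset_id: vn_class_id} from the last release.
    """
    def soft_match(old, new):
        # '13.1' soft-matches its subclass '13.1-1' and vice versa
        return new == old or new.startswith(old + "-") or old.startswith(new + "-")

    mappings = {}
    for roleset, vn_classes in pb_rolesets.items():
        lemma = roleset.split(".")[0]
        for old_class in vn_classes:
            if old_class in verbnet.class_ids:
                mappings.setdefault(roleset, set()).add(old_class)
            else:
                # the verb may have moved to a (sub)class with a related id
                for cand in verbnet.classes_for_lemma(lemma):
                    if soft_match(old_class, cand):
                        mappings.setdefault(roleset, set()).add(cand)

    # keep earlier manual mappings only when still consistent with VerbNet
    for roleset, vn_class in previous_mappings.items():
        if vn_class in verbnet.class_ids:
            mappings.setdefault(roleset, set()).add(vn_class)
    return mappings
```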
VN to FN mappings
SemLink contains similar mapping files from VN to FN: one mapping from VN senses to FN frames, and one mapping from VN thematic roles to FN's typically more specific frame elements. As with PB and VN, FN has seen a significant update (to version 1.7) since the previous SemLink release, and these mappings files have become outdated.
Unlike PB, neither VN nor FN implicitly keeps track of mappings to the other resource: the only linking between them is in SemLink's mapping files. Therefore, for these files, we employed a semi-automated system to identify incorrect mappings and make updates. We run a script to identify whether VN class/role and FN frame/frame elements are valid. This is done by checking if the classes, roles, frames and frame elements still exist in the current version of the resource, and then checking if the roles and frame elements are still valid for the given classes and frames. We then pass them to annotators if there are errors. This was done for all of the mappings in the previous version, yielding 2,387 valid mappings, 160 of which came from manual re-annotation. These mappings were then compiled to form the new VN to FN mapping file for SemLink 2.0.
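A minimal sketch of that validity check follows; the verbnet and framenet objects are assumed interfaces over the current resource versions (exposing class/role and frame/frame-element inventories), not real library APIs. Mappings that fail any check are the ones routed to annotators.

```python
def check_vn_fn_mapping(mapping, verbnet, framenet):
    """Return a list of problems with one VN->FN mapping, or [] if valid.

    mapping: dict with keys 'vn_class', 'vn_role', 'fn_frame', 'fn_element'.
    """
    problems = []
    if mapping["vn_class"] not in verbnet.class_ids:
        problems.append("VN class no longer exists")
    elif mapping["vn_role"] not in verbnet.roles_of(mapping["vn_class"]):
        problems.append("thematic role not valid for this VN class")
    if mapping["fn_frame"] not in framenet.frame_names:
        problems.append("FN frame no longer exists")
    elif mapping["fn_element"] not in framenet.elements_of(mapping["fn_frame"]):
        problems.append("frame element not valid for this frame")
    return problems

def split_mappings(mappings, verbnet, framenet):
    """Separate still-valid mappings from those needing manual re-annotation."""
    valid, to_review = [], []
    for m in mappings:
        (to_review if check_vn_fn_mapping(m, verbnet, framenet) else valid).append(m)
    return valid, to_review
```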
For both PB to VN and VN to FN mappings, we employed automatic procedures that allowed us to update outdated SemLink instances to match the current resources. However, these updates are necessarily not comprehensive: we only updated instances for which we could identify automatic mappings between old and new. If the resources changed in unpredictable ways (i.e., a sense tag itself changed meaning), these mappings may still be inconsistent. We therefore include for each instance in SemLink 2.0 an indicator for each mapping of whether it was derived from an automatic procedure or manually annotated.
Annotations
The second artifact produced for SemLink is a set of annotations. These consist of predicates annotated with PB frames, VN senses, FN frames, ON groupings, and each resource's representation of the predicates' arguments. An example of an annotation instance is shown in Figure 1.
Updates to Previous Annotations
All instances underwent an automatic update process based on our revision of mapping resources. The sense tags for each resource are validated, and automatically updated via mappings if errors are found. This process is repeated for role arguments. This was done for the 74,920 instances available with the previous SemLink. In order to keep the resource as large and as flexible as possible, as long as an instance had a PB roleset, we didn't remove instances with invalid mappings: rather, we kept these instances and left the additional information (VN, FN, etc) as "None". This allows us to maintain the size of the resource and while preserving only the accurate annotations.
New Annotations
Figure 1: SemLink annotation instance for the verb "ringing" in the above sentence.

In addition to updating the previous annotations, we were also able to leverage additional annotation projects to expand the scope of the SemLink resource. We gathered 72,822 additional instances from the OntoNotes 5.0 release annotated with the unified PB rolesets (Weischedel et al., 2011), and employed our updated mapping files to automatically attribute VN and FN information to them. We also collected 5,300 instances that were manually annotated with VN classes (Palmer et al., 2017), and extracted PB and FN information from these based on mapping files.
Similar to the updates above, we automatically checked these instances to determine whether their annotations were valid (the class, sense, or frame still exists) in the modern versions of each resource, and then added them to SemLink's annotation corpus. A summary of the update to the annotations is shown in Table 1.
From this summary we can see substantial improvements to the dataset across all resources, with the greatest impact coming from the new annotations. However, as we automatically add instances based on PB and VN annotation, they often lack mappings to the other resources. This, combined with the fact that some VN and FN annotations were removed due to inconsistency with the latest versions, leads to a decrease in the percent of instances tagged with each particular resource, despite the increase in total annotations.
VN Tools
In order to ensure the applicability of these mappings and lexical resources, we include two additional components: sense embeddings and common arguments. These are based on VN, as it directly links to PB and FN.
VN Embeddings
We train embeddings based on VN in a style similar to that of (Sikos and Padó, 2018). We tag a corpus of 4.5m sentences from Wikipedia with a VN class tagger (Palmer et al., 2017). We then learn embeddings for both VN classes and specific VN senses by modifying the resulting corpora. First, to generate generic VN class embeddings, we replace the verb directly with its labeled class. This allows the embedding model to learn a representation that generalizes over all instances of a particular VN class, and provides an abstraction away from the individual lexical items. Second, to generate sense-specific word embeddings, we concatenate the class information along with the verb. This yields more specific embeddings that concretely reflect contextual usages of the given verb. The resulting sentences can then be fed to a lexical embedding algorithm of choice: here we use GloVe (Pennington et al., 2014) and Word2Vec (Mikolov et al., 2013) embeddings of size 100.
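The corpus preprocessing behind these two embedding variants can be sketched as follows. The VN class tagger is abstracted away as an iterable of (tokens, verb position, class) triples, and gensim's Word2Vec stands in for either embedding algorithm; parameter values are illustrative.

```python
from gensim.models import Word2Vec

def to_class_tokens(tokens, verb_index, vn_class):
    """Replace the tagged verb with its VN class to learn class embeddings."""
    out = list(tokens)
    out[verb_index] = vn_class                            # e.g. 'run' -> '51.3.2'
    return out

def to_sense_tokens(tokens, verb_index, vn_class):
    """Concatenate verb and class to learn sense-specific embeddings."""
    out = list(tokens)
    out[verb_index] = f"{tokens[verb_index]}_{vn_class}"  # e.g. 'run_51.3.2'
    return out

def train_sense_embeddings(tagged_corpus, dim=100):
    """tagged_corpus: iterable of (tokens, verb_index, vn_class) triples,
    assumed to come from a VN class tagger run over raw sentences."""
    sentences = [to_sense_tokens(toks, i, c) for toks, i, c in tagged_corpus]
    return Word2Vec(sentences, vector_size=dim, window=5, min_count=5, workers=4)
```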
These embeddings have proven an effective addition to traditional embeddings for classification tasks, and even have advantages over contextual embeddings. Stowe (2019) shows that incorporating VN-based sense embeddings into LSTM-based metaphor detection improves results over using ELMo embeddings alone, despite the fact that the contextualized ELMo embeddings should independently capture sense information (Peters et al., 2018). 3 These methods for learning embeddings are broadly applicable to any lexical resource, and are adaptable to changing versions; the embeddings provided are trained using VN 3.3, and as we provide links from VN to PB and FN, we further believe that the accompanying embeddings can be directly linked to these two resources.
VN Common Arguments
In addition to embeddings, we also collect argument information based on VN class tagging. For each class, we collect the most frequent subjects and objects of verbs tagged with that class. This is done by tagging the above Wikipedia corpus with VN classes, then using a dependency parser to extract subject and object information (Chen and Manning, 2014). This automated procedure does inherently introduce noise, but it allows us to form a general idea of the kinds of arguments that typify the semantic roles and to better understand the syntactic and collocational properties of verb classes. Practitioners who are researching verb classes can use these to better understand, from a quantitative perspective, what kinds of subjects and objects are likely to appear with given verb classes, further facilitating research into lexical semantics.
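A simplified version of this extraction is sketched below. It uses spaCy as a stand-in for the dependency parser cited above (requiring the en_core_web_sm model), and assumes a vn_class_of function that returns the VN class of a tagged verb token, or None for untagged tokens; counts are aggregated per class.

```python
from collections import Counter, defaultdict
import spacy

nlp = spacy.load("en_core_web_sm")   # stand-in for the parser used in the paper

def collect_common_arguments(sentences, vn_class_of, top_k=20):
    """Count the most frequent subjects and direct objects per VN class."""
    subjects = defaultdict(Counter)
    objects = defaultdict(Counter)
    for doc in nlp.pipe(sentences):
        for tok in doc:
            if tok.dep_ not in ("nsubj", "dobj"):
                continue
            vn_class = vn_class_of(tok.head)   # head verb's class, or None
            if vn_class is None:
                continue
            target = subjects if tok.dep_ == "nsubj" else objects
            target[vn_class][tok.lemma_.lower()] += 1
    return ({c: cnt.most_common(top_k) for c, cnt in subjects.items()},
            {c: cnt.most_common(top_k) for c, cnt in objects.items()})
```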
Software
In order to manage these updates, we've built a substantial number of infrastructure components to support the interaction between these resources. This includes interfaces to each resource, to Sem-Link, and tools for making automatic updates based on different versions. The SemLink scripts have the flexibility to use and compare various different versions of each resource; this allows us to quickly update SemLink to new versions. This software will be released along with the new version via GitHub, with the hope that the community can maintain and improve its functionality as necessary, and to allow researchers to be able to easily interact with both the resources linked and the SemLink resource itself. Critically, this resource will mitigate the damage of future changes to each individual resource, as SemLink can painlessly be updated to accommodate new versions.
Conclusions and Future Work
Our updates to SemLink consist of four main components. (1) We update SemLink data to match the current versions of each resource through automatic and manual methods. (2) We add annotations to improve the coverage of the resource. (3) We add sense embeddings and argument information.
(4) We provide automatic tools to allow the Sem-Link resource to be consistently updated. As these lexical resources are always changing, these tools are necessary for the resource to remain viable, and while the process of linking semantic resources can likely never be fully automated, these tools can assist in this process. This work then comes with two artifacts: the new SemLink resource (mapping files and annotations) as well as architecture for updating and managing SemLink.
The coverage is by no means complete and many lexical items in each resource contain no viable mappings. Manual annotation of links between resources is essential for the success of the Sem-Link resource: while we can automatically filter out inaccurate mappings when resources change, this leaves blind spots where we have incomplete mappings, and manual annotation is currently the most accurate way to cover these gaps.
Another direction of future work is evaluating the usefulness of these linked resources. While there have been evaluations comparing the three semantic role labelling frameworks provided via PB, VN, and FN (Hartmann et al., 2017), a fullscale evaluation of the links between them is yet to be done, and may provide valuable insight not only into how to best improve SemLink, but also into how these kinds of linked resources can be best employed. While modern NLP focuses largely around end-to-end models that implicitly capture semantic relations, there is still a role for handcurated lexical resources to play, and we believe SemLink can be an effective resource for those studying computational lexical semantics, word sense disambiguation and semantic role labelling, and other tasks requiring linked lexical resources.
Acknowledgements
We gratefully acknowledge the support of DTRAl-16-1-0002/Project 1553695, eTASC -Empirical Evidence for a Theoretical Approach to Semantic Components and DARPA 15-18-CwC-FP-032 Communicating with Computers, C3 Cognitively Coherent Human-Computer Communication (sub from UIUC) and Elementary Composable Ideas (ECI) Repository (sub from SIFT), and DARPA FA8750-18-2-0016-AIDA -RAMFIS: Representations of vectors and Abstract Meanings for Information Synthesis. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA, DTRA, or the U.S. government.
Table 1: Summary of Annotation Updates to SemLink
For the remainder of this work, we will refer to each by its acronym: PB, VN, FN, and ON, respectively.
Note that these results are from embeddings trained on VN version 3.2; they have since been updated to version 3.3
C. F. Baker, C. J. Fillmore, and J. B. Lowe. 1998. The Berkeley FrameNet project. In COLING-ACL '98, pages 86-90, Montreal, QC.
Claire Bonial, William Corvey, Martha Palmer, Volha V. Petukhova, and Harry Bunt. 2011. A hierarchical unification of LIRICS and VerbNet semantic roles. In 2011 IEEE Fifth International Conference on Semantic Computing, pages 483-489. IEEE.
Claire Bonial, Kevin Stowe, and Martha Palmer. 2013. Renewing and revising SemLink. In Proceedings of the 2nd Workshop on Linked Data in Linguistics (LDL-2013): Representing and linking lexicons, terminologies and other language data, pages 9-17, Pisa, Italy. Association for Computational Linguistics.
Susan Windisch Brown, Julia Bonn, James Gung, Annie Zaenen, James Pustejovsky, and Martha Palmer. 2019. VerbNet representations: Subevent semantics for transfer verbs. In Proceedings of the First International Workshop on Designing Meaning Representations, pages 154-163.
Susan Windisch Brown, James Pustejovsky, Annie Zaenen, and Martha Palmer. 2018. Integrating generative lexicon event structures into VerbNet. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018).
Danqi Chen and Christopher Manning. 2014. A fast and accurate dependency parser using neural networks. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 740-750, Doha, Qatar. Association for Computational Linguistics.
Silvana Hartmann, Éva Mújdricza-Maydt, Ilia Kuznetsov, Iryna Gurevych, and Anette Frank. 2017. Assessing SRL frameworks with automatic training data expansion. In Proceedings of the 11th Linguistic Annotation Workshop, pages 115-121, Valencia, Spain. Association for Computational Linguistics.
Daisuke Kawahara, Daniel W. Peterson, and Martha Palmer. 2014. A step-wise usage-based method for inducing polysemy-aware verb classes. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1030-1040, Baltimore, Maryland. Association for Computational Linguistics.
K. Kipper-Schuler. 2005. VerbNet: A broad-coverage, comprehensive verb lexicon.
Ilia Kuznetsov and Iryna Gurevych. 2018. Corpus-driven thematic hierarchy induction. In Proceedings of the 22nd Conference on Computational Natural Language Learning, pages 54-64, Brussels, Belgium. Association for Computational Linguistics.
Ilia Kuznetsov and Iryna Gurevych. 2020. A matter of framing: The impact of linguistic formalism on probing results. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 171-182, Online. Association for Computational Linguistics.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. CoRR, abs/1301.3781.
M. Palmer, D. Gildea, and P. Kingsbury. 2005. The Proposition Bank: An annotated corpus of semantic roles. Volume 31, pages 71-106.
M. Palmer. 2009. SemLink: Linking PropBank, VerbNet and FrameNet. In Proceedings of the Generative Lexicon Conference, Pisa, Italy.
Martha Palmer, James Gung, Claire Bonial, Jinho Choi, Orin Hargraves, Derek Palmer, and Kevin Stowe. 2017. The pitfalls of shortcuts: Tales from the word sense tagging trenches. In Essays in Lexical Semantics and Computational Lexicography - In honor of Adam Kilgarriff.
Jeffrey Pennington, Richard Socher, and Christopher Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543, Doha, Qatar. Association for Computational Linguistics.
Matthew Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2227-2237, New Orleans, Louisiana. Association for Computational Linguistics.
Daniel Peterson, Jordan Boyd-Graber, Martha Palmer, and Daisuke Kawahara. 2016. Leveraging VerbNet to build corpus-specific verb clusters. In Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics, pages 102-107, Berlin, Germany. Association for Computational Linguistics.
Daniel Peterson, Susan Brown, and Martha Palmer. 2020. Verb class induction with partial supervision. In Proceedings of the Thirty-fourth AAAI Conference on Artificial Intelligence, New York City, NY.
Drew Reisinger, Rachel Rudinger, Francis Ferraro, Craig Harman, Kyle Rawlins, and Benjamin Van Durme. 2015. Semantic proto-roles. Transactions of the Association for Computational Linguistics, 3:475-488.
Jennifer Sikos and Sebastian Padó. 2018. Using embeddings to compare FrameNet frames across languages. In Proceedings of the First Workshop on Linguistic Resources for Natural Language Processing, pages 91-101, Santa Fe, New Mexico, USA. Association for Computational Linguistics.
Kevin Stowe. 2019. Syntactic and semantic improvements to computational metaphor processing.
R. Weischedel, E. Hovy, M. Marcus, M. Palmer, R. Belvin, S. Pradhan, L. Ramshaw, and N. Xue. 2011. OntoNotes: A large training corpus for enhanced processing. In Handbook of Natural Language Processing and Machine Translation: Global Automatic Language Exploitation, pages 53-63. |
1,421,908 | Recognizing Implicit Discourse Relations in the Penn Discourse Treebank | We present an implicit discourse relation classifier in the Penn Discourse Treebank (PDTB). Our classifier considers the context of the two arguments, word pair information, as well as the arguments' internal constituent and dependency parses. Our results on the PDTB yields a significant 14.1% improvement over the baseline. In our error analysis, we discuss four challenges in recognizing implicit relations in the PDTB. | [
7859072,
1157793,
12636832,
13374927,
3102322,
15893207,
210363
] | Recognizing Implicit Discourse Relations in the Penn Discourse Treebank
August 2009
Ziheng Lin linzihen@comp.nus.edu.sg
Department of Computer Science
National University of Singapore
13 Computing Drive, Singapore 117417
Min-Yen Kan
Department of Computer Science
National University of Singapore
13 Computing Drive, Singapore 117417
Hwee Tou Ng
Department of Computer Science
National University of Singapore
13 Computing Drive, Singapore 117417
Recognizing Implicit Discourse Relations in the Penn Discourse Treebank
ACL and AFNLP
the 2009 Conference on Empirical Methods in Natural Language Processing, Singapore, August 2009
We present an implicit discourse relation classifier in the Penn Discourse Treebank (PDTB). Our classifier considers the context of the two arguments, word pair information, as well as the arguments' internal constituent and dependency parses. Our results on the PDTB yields a significant 14.1% improvement over the baseline. In our error analysis, we discuss four challenges in recognizing implicit relations in the PDTB.
Introduction
In the field of discourse modeling, it is widely agreed that text is not understood in isolation, but in relation to its context. One focus in the study of discourse is to identify and label the relations between textual units (clauses, sentences, or paragraphs). Such research can enable downstream natural language processing (NLP) such as summarization, question answering, and textual entailment. For example, recognizing causal relations can assist in answering why questions. Detecting contrast and restatements is useful for paraphrasing and summarization systems. While different discourse frameworks have been proposed from different perspectives (Mann and Thompson, 1988; Hobbs, 1990; Lascarides and Asher, 1993; Knott and Sanders, 1998; Webber, 2004), most admit these basic types of discourse relationships between textual units.
When there is a discourse connective (e.g., because) between two text spans, it is often easy to recognize the relation between the spans, as most connectives are unambiguous (Miltsakaki et al., 2005; Pitler et al., 2008). On the other hand, it is difficult to recognize the discourse relations when there are no explicit textual cues. We term these cases explicit and implicit relations, respectively.
While the recognition of discourse structure has been studied in the context of explicit relations (Marcu, 1998) in the past, little published work has yet attempted to recognize implicit discourse relations between text spans.
Detecting implicit relations is a critical step in forming a discourse understanding of text, as many text spans do not mark their discourse relations with explicit cues. Recently, the Penn Discourse Treebank (PDTB) has been released, which features discourse level annotation on both explicit and implicit relations. It provides a valuable linguistic resource towards understanding discourse relations and a common platform for researchers to develop discourse-centric systems. With the recent release of the second version of this corpus (Prasad et al., 2008), which provides a cleaner and more thorough implicit relation annotation, there is an opportunity to address this area of work.
In this paper, we provide classification of implicit discourse relations on the second version of the PDTB. The features we used include contextual modeling of relation dependencies, features extracted from constituent parse trees and dependency parse trees, and word pair features. We show an accuracy of 40.2%, which is a significant improvement of 14.1% over the majority baseline.
After reviewing related work, we first give an overview of the Penn Discourse Treebank. We then describe our classification methodology, followed by experimental results. We give a detailed discussion on the difficulties of implicit relation classification in the PDTB, and then conclude the paper.
Related Work
One of the first works that use statistical methods to detect implicit discourse relations is that of Marcu and Echihabi (2002). They showed that word pairs extracted from two text spans provide clues for detecting the discourse relation between the text spans. They used a set of textual patterns to automatically construct a large corpus of text span pairs from the web. These text spans were assumed to be instances of specific discourse relations. They removed the discourse connectives from the pairs to form an implicit relation corpus. From this corpus, they collected word pair statistics, which were used in a Naïve Bayes framework to classify discourse relations. Saito et al. (2006) extended this theme, to show that phrasal patterns extracted from a text span pair provide useful evidence in the relation classification. For example, the pattern "... should have done ..." usually signals a contrast. The authors combined word pairs with phrasal patterns, and conducted experiments with these two feature classes to recognize implicit relations between adjacent sentences in a Japanese corpus.
Both of these previous works have the shortcoming of downgrading explicit relations to implicit ones by removing the explicit discourse connectives. While this is a good approach to automatically create large corpora, natively implicit relations may be signaled in different ways. The fact that explicit relations are explicitly signaled indicates that such relations need a cue to be unambiguous to human readers. Thus, such an artificial implicit relation corpus may exhibit marked differences from a natively implicit one. We validate this claim later in this work. Wellner et al. (2006) used multiple knowledge sources to produce syntactic and lexico-semantic features, which were then used to automatically identify and classify explicit and implicit discourse relations in the Discourse Graphbank (Wolf and Gibson, 2005). Their experiments show that discourse connectives and the distance between the two text spans have the most impact, and event-based features also contribute to the performance. However, their system may not work well for implicit relations alone, as the two most prominent features only apply to explicit relations: implicit relations do not have discourse connectives and the two text spans of an implicit relation are usually adjacent to each other.
The work that is most related to ours is the forthcoming paper of Pitler et al. (2009) on implicit relation classification on the second version of the PDTB. They performed classification of implicit discourse relations using several linguistically informed features, such as word polar-ity, verb classes, and word pairs, showing performance increases over a random classification baseline.
Overview of the Penn Discourse Treebank
The Penn Discourse Treebank (PDTB) is a discourse level annotation (Prasad et al., 2008) over the one million word Wall Street Journal corpus. The PDTB adopts the predicate-argument view of discourse relations, where a discourse connective (e.g., because) is treated as a predicate that takes two text spans as its arguments. The argument that the discourse connective structurally attaches to is called Arg2, and the other argument is called Arg1. The PDTB provides annotations for explicit and implicit discourse relations. By definition, an explicit relation contains an explicit discourse connective. In the PDTB, 100 explicit connectives are annotated. Example 1 shows an explicit Contrast relation signaled by the discourse connective but.
The last line shows the relation type and the file in the PDTB from which the example is drawn.
(1) Arg1: In any case, the brokerage firms are clearly moving faster to create new ads than they did in the fall of 1987. Arg2: But it remains to be seen whether their ads will be any more effective. (Contrast -wsj 2201) In the PDTB, implicit relations are constrained by adjacency: only pairs of adjacent sentences within paragraphs are examined for the existence of implicit relations. When an implicit relation was inferred by an annotator, he/she inserted an implicit connective that best reflects the relation. Example 2 shows an implicit relation, where the annotator inferred a Cause relation and inserted an implicit connective so (i.e., the original text does not include so). The text in the box (he says) shows the attribution, i.e., the agent that expresses the arguments. The PDTB provides annotation for the attributions and supplements of the arguments.
(2) Arg1: "A lot of investor confidence comes from the fact that they can speak to us," he says . Arg2: [so] "To maintain that dialogue is absolutely crucial." (Cause -wsj 2201)
The PDTB provides a three level hierarchy of relation tags for its annotation. The first level consists of four major relation classes: Temporal, Contingency, Comparison, and Expansion. For each class, a second level of types is defined to provide finer semantic distinctions. A third level of subtypes is defined for only some types to specify the semantic contribution of each argument. Relation classes and types in the PDTB are reproduced in the first two columns of Table 1.
We focus on implicit relation classification of the Level 2 types in the PDTB, as we feel that Level 1 classes are too general and coarse-grained for downstream applications, while Level 3 subtypes are too fine-grained and are only provided for some types. Table 1 shows the distribution of the 16 Level 2 relation types of the implicit relations from the training sections, i.e., Sections 2 -21. As there are too few training instances for Condition, Pragmatic Condition, Pragmatic Contrast, Pragmatic Concession, and Exception, we removed these five types from further consideration. We thus use the remaining 11 Level 2 types in our work. The initial distribution and adjusted distribution are shown in the last two columns of the table. We see that the three predominant types are Cause (25.63%), Conjunction (22.25%), and Restatement (19.23%).
Methodology
Our implicit relation classifier is built using supervised learning on a maximum entropy classifier. As such, our approach processes the annotated argument pairs into binary feature vectors suitable for use in training a classifier. Attributions and supplements are ignored from the relations, as our system does not make use of them. We chose the following four classes of features as they represent a wide range of information -contextual, syntactic, and lexical -that have been shown to be helpful in previous works and tasks. We now discuss the four categories of features used in our framework. Contextual Features. Lee et al. (2006) showed that there are a variety of possible dependencies between pairs of discourse relations: independent, fully embedded argument, shared argument, properly contained argument, pure crossing, and partially overlapping argument. They argued that the last three cases -properly contained argument, pure crossing, and partially overlapping argument -can be factored out by appealing to discourse notions such as anaphora and attribution. Moreover, we also observed from the PDTB corpus that fully embedded argument and shared argument are the most common patterns, which are shown in Figure 1. The top portion of Figure 1 shows a case where relation r 1 is fully embedded in Arg1 of relation r 2 , and the bottom portion shows r 1 and r 2 sharing an argument. We model these two patterns as contextual features. We believe that these discourse dependency patterns between a pair of adjacent relations are useful in identifying the relations. For example, if we have three items in a list, according to the PDTB binary predicate-argument definitions, there will be a List relation between the first item and the second item, and another List relation between the previous List relation and the third item, where the previous List relation is fully embedded in Arg1 of the current List relation. As we are using the gold standard argument segmentation from the PDTB, we can extract and leverage these dependency patterns. For each relation curr, we use the previous relation prev and the next relation next as evidence to fire six binary features, as defined in Table 2.
Note that while curr is an implicit relation to be classified, both prev and next can be implicit or explicit relations. Pitler et al. (2008) showed that the type of a relation sometimes correlates to the type of its adjacent relation. When the adjacent relation is explicit, its type may be suggested by its discourse connective. Thus we include another two groups of contextual features representing the connectives of prev and next when they are explicit relations.
Fully embedded argument: prev embedded in curr.Arg1; next embedded in curr.Arg2; curr embedded in prev.Arg2; curr embedded in next.Arg1.
Shared argument: prev.Arg2 = curr.Arg1; curr.Arg2 = next.Arg1.
Table 2: Six contextual features derived from two discourse dependency patterns. curr is the relation we want to classify.
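The six Table 2 features can be computed directly from the gold argument segmentation. The sketch below is illustrative only and is not the authors' code; it assumes each argument is represented as a (start, end) character span and that a relation's overall span runs from the earliest to the latest offset of its two arguments.

```python
# Illustrative computation of the six contextual features in Table 2.
# Assumption (not from the paper): arguments are (start, end) character spans.

def span_contains(outer, inner):
    """True if span `inner` lies entirely within span `outer`."""
    return outer[0] <= inner[0] and inner[1] <= outer[1]

def relation_span(rel):
    """Rough overall span of a relation: min/max over its two argument spans."""
    points = rel['arg1'] + rel['arg2']
    return (min(points), max(points))

def contextual_features(prev, curr, nxt):
    """prev/curr/nxt are dicts like {'arg1': (s, e), 'arg2': (s, e)}."""
    return {
        'prev_embedded_in_curr_arg1': span_contains(curr['arg1'], relation_span(prev)),
        'next_embedded_in_curr_arg2': span_contains(curr['arg2'], relation_span(nxt)),
        'curr_embedded_in_prev_arg2': span_contains(prev['arg2'], relation_span(curr)),
        'curr_embedded_in_next_arg1': span_contains(nxt['arg1'], relation_span(curr)),
        'prev_arg2_eq_curr_arg1': prev['arg2'] == curr['arg1'],
        'curr_arg2_eq_next_arg1': curr['arg2'] == nxt['arg1'],
    }
```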
Constituent Parse Features.
Research work from other NLP areas, such as semantic role labeling, has shown that features derived from syntactic trees are useful in semantic understanding. Such features include syntactic paths (Jiang and Ng, 2006) and tree fragments (Moschitti, 2004). From our observation of the PDTB relations, syntactic structure within one argument may constrain the relation type and the syntactic structure of the other argument. For example, the constituent parse structure in Figure 2(a) usually signals an Asynchronous relation when it appears in Arg2, as shown in Example 3, while the structure in Figure 2(b) usually acts as a clue for a Cause relation when it appears in Arg1, as shown in Example 4.
In both examples, the lexicalized parts of the parse structure are bolded.
(3) Arg1: But the RTC also requires "working" capital to maintain the bad assets of thrifts that are sold

For Arg1 and Arg2 of each relation, we extract the corresponding gold standard syntactic parse trees from the corpus. As an argument can be a single sentence, a clause, or multiple sentences, this results in a whole parse tree, parts of a parse tree, or multiple parse trees. From these parses, we extract all possible production rules. Although the structures shown in Figure 2 are tree fragments, tree fragments are not extracted, as production rules act as a generalization of tree fragments. As an example, Figure 3 shows the parse tree for Arg1 of an implicit discourse relation from the text wsj 2224. As Arg1 is a clause, the extracted tree is a subtree. We then collect all production rules from this subtree, with function tags (e.g., SBJ) removed from internal nodes. POS tag to word production rules are collected as well. The resulting production rules include ones such as: S → NP VP, NP → PRP, PRP → "We", etc. Each production rule is represented as three binary features to check whether this rule appears in Arg1, Arg2, and both arguments.
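A rough sketch of this extraction is given below, assuming the gold parses are available in Penn Treebank bracketing and are read with NLTK; the function-tag stripping and feature naming here are illustrative choices, not the paper's implementation.

```python
# Sketch: production rules (including POS -> word rules) from a bracketed parse.
from nltk import Tree

def production_rules(tree_str):
    """Return rules like 'S -> NP VP' and 'PRP -> We' from one parse string."""
    tree = Tree.fromstring(tree_str)
    rules = set()
    for prod in tree.productions():
        lhs = prod.lhs().symbol().split('-')[0]  # drop function tags such as NP-SBJ
        rhs = ' '.join(x if isinstance(x, str) else str(x).split('-')[0]
                       for x in prod.rhs())      # keep word terminals unchanged
        rules.add(f"{lhs} -> {rhs}")
    return rules

def rule_features(arg1_rules, arg2_rules):
    """Three binary features per rule: present in Arg1, in Arg2, and in both."""
    feats = {}
    for r in arg1_rules | arg2_rules:
        feats[f"{r}:arg1"] = r in arg1_rules
        feats[f"{r}:arg2"] = r in arg2_rules
        feats[f"{r}:both"] = r in arg1_rules and r in arg2_rules
    return feats
```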
Dependency Parse Features. We also experimented with features extracted from dependency trees of the arguments. We used the Stanford dependency parser (de Marneffe et al., 2006), which takes in a constituent parse tree and produces a dependency tree. Again, for an argument, we may collect a whole dependency tree, parts of a tree, or multiple trees, depending on the span of the argument. The reason for using dependency trees is that they encode additional information at the word level that is not explicitly present in the constituent trees. From each tree, we collect all words with the dependency types from their dependents. Figure 4 shows the dependency subtree for the same example in Figure 3, from which we collect three dependency rules: "had" ← nsubj dobj, "problems" ← det nn advmod, "at" ← dep.
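The dependency rules can be collected analogously. The sketch below assumes the parser output has already been converted to (head, dependency type, dependent) triples, and it sorts the dependent types into a canonical order; both choices are illustrative rather than taken from the paper.

```python
# Sketch: dependency rules of the form  word <- dependent types.
from collections import defaultdict

def dependency_rules(triples):
    """E.g. [('had', 'nsubj', 'We'), ('had', 'dobj', 'problems')]
       -> {'had <- dobj nsubj'}."""
    deps = defaultdict(list)
    for head, dep_type, _child in triples:
        deps[head].append(dep_type)
    return {f"{head} <- {' '.join(sorted(types))}" for head, types in deps.items()}
```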
Note that unlike the constituent parse features which are guaranteed to be accurate (as they are extracted from the gold parses of the corpus), the dependency parses occasionally contain errors. As with the constituent parse features, each dependency rule is represented as three binary features to check whether it appears in Arg1, Arg2, and both arguments. Lexical Features. Marcu and Echihabi (2002) demonstrated that word pairs extracted from the respective text spans are a good signal of the discourse relation between arguments. Thus we also consider word pairs as a feature class. We stemmed and collected all word pairs from Arg1 and Arg2, i.e., all (w i , w j ) where w i is a word from Arg1 and w j a word from Arg2. Unlike their study, we limit the collection of word pair statistics to occurrences only in the PDTB corpus.
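A minimal sketch of the word-pair features follows; the tokenizer and the Porter stemmer are illustrative choices, since the text only states that words were stemmed.

```python
# Sketch: all (w_i, w_j) pairs with w_i from Arg1 and w_j from Arg2, stemmed.
from nltk.stem import PorterStemmer
from nltk.tokenize import word_tokenize

def word_pairs(arg1_text, arg2_text):
    stem = PorterStemmer().stem
    w1 = {stem(w.lower()) for w in word_tokenize(arg1_text)}
    w2 = {stem(w.lower()) for w in word_tokenize(arg2_text)}
    return {(a, b) for a in w1 for b in w2}
```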
Feature Selection
For the collection of production rules, dependency rules, and word pairs, we used a frequency cutoff of 5 to remove infrequent features. From the implicit relation dataset of the training sections (i.e., Sec. 2 -21), we extracted 11,113 production rules, 5,031 dependency rules, and 105,783 word pairs in total. We applied mutual information (MI) to these three classes of features separately, resulting in three ranked lists. A feature f has 11 MI values with all 11 types (for example, M I(f, Cause) and M I(f, Restatement)), and we used the MI with the highest value for a feature to select features. In our experiments, the top features from the lists are used in the training and test phases.
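The per-type MI ranking might look like the following sketch, where X is a binary instance-by-feature matrix and y holds the Level 2 labels; the use of scikit-learn's mutual_info_score is an assumption made for illustration, not the authors' tooling.

```python
# Sketch: score each feature by its maximum MI with any relation type,
# then keep the top-k features.
import numpy as np
from sklearn.metrics import mutual_info_score

def top_k_features(X, y, k):
    X = np.asarray(X)
    y = np.asarray(y)
    scores = []
    for j in range(X.shape[1]):
        per_type = [mutual_info_score(y == t, X[:, j]) for t in np.unique(y)]
        scores.append(max(per_type))              # MI(f, type) with the best type
    return np.argsort(scores)[::-1][:k]           # column indices of top-k features
```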
Experiments
We experimented with a maximum entropy classifier from the OpenNLP MaxEnt package using various combinations of features to assess their efficacy. We used PDTB Sections 2 -21 as our training set and Section 23 as the test set, and only used the implicit discourse relations.
In the PDTB, about 2.2% of the implicit relations are annotated with two types, as shown in Example 7 in Section 6. During training, a relation that is annotated with two types is considered as two training instances, each with one of the types. During testing, such a relation is considered one test instance, and if the classifier assigns either of the two types, we consider it as correct. Thus, the test accuracy is calculated as the number of correctly classified test instances divided by the total number of test instances.
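The scoring rule for doubly annotated relations can be sketched as below, with the gold annotation of each test instance stored as a set of admissible types.

```python
# Sketch: a prediction is counted correct if it matches either gold type.
def accuracy(gold_label_sets, predictions):
    correct = sum(pred in gold for gold, pred in zip(gold_label_sets, predictions))
    return correct / len(predictions)
```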
In our work, we use the majority class as the baseline, where all instances are classified as Cause. This yields an accuracy of 26.1% on the test set. A random baseline yields an even lower accuracy of 9.1% on the test set.
Results and Analysis
To check the efficacy of the different feature classes, we trained individual classifiers on all features within a single feature class (Rows 1 to 4 in Table 3) as well as a single classifier trained with all features from all feature classes (Row 5). Among the four individual feature classes, production rules and word pairs yield significantly better performance over the baseline with p < 0.01 and p < 0.05 respectively, while context features perform slightly better than the baseline. Interestingly, we noted that the performance with all dependency rules is slightly lower than the baseline (Row 2), and applying all feature classes does not yield the highest accuracy (Row 5), which we suspected were due to noise. To confirm this, we employed MI to select the top 100 production rules and dependency rules, and the top 500 word pairs (as word pairs are more sparse). We then repeated the same set of experiments, as shown in Table 4 (Row 4 of this table is repeated from Table 3 for consistency). With only the top features, production rules, dependency rules, and word pairs all gave significant improvement over the baseline with p < 0.01. When we used all feature classes, as in the last row, we obtained the highest accuracy of 40.2%. Table 4 also validates the pattern of predictiveness of the feature classes: production rules contribute the most to the performance individually, followed by word pairs, dependency rules, and finally, context features. A natural question to ask is whether any of these feature classes can be omitted to achieve the same level of performance as the combined classifier. To answer this question, we conducted a final set of experiments, in which we gradually added in feature classes in the order of their predictiveness (i.e., production rules, then word pairs, dependency rules, and context features), with results shown in Table 5. These results confirm that each additional feature class indeed contributes a marginal (though not statistically significant) performance improvement, and that all feature classes are needed for optimal performance.
Table 5: Accuracy with feature classes gradually added in the order of their predictiveness.
Note that Row 3 of Table 3 corresponds to the system of Marcu and Echihabi (2002), which applies only word pair features. The difference is that they used a Naïve Bayes classifier while we used a maximum entropy classifier. As we did not implement their Naïve Bayes classifier, we compare their method's performance using the result from Table 3 Row 3 with ours from Table 5 Row 4, which shows that our system significantly (p < 0.01) outperforms theirs.
Table 6: Recall, precision, F1, and counts for 11 Level 2 relation types. "-" indicates 0.00.
Table 6 shows the recall, precision, and F1 measure for the 11 individual Level 2 relation types in the final experimental setup (Row 4 from Table 5). A point worth noting is that the classifier labels no instances of the Synchrony, Pragmatic Cause, Concession, and Alternative relation types. The reason is that the percentages for these four types are so small that the classifier is highly skewed towards the other types. From the distribution shown in Table 1, there are just 4.76% training data for these four types, but 95.24% for the remaining seven types. In fact, only 30 test instances are labeled with these four types, as shown in the last column of Table 6. As Cause is the most predominant type in the training data, the classifier tends to label uncertain relations as Cause, thus giving Cause high recall but low precision. We see that the F1 measures correlate well with the training data frequency, thus we hypothesize that accuracy may improve if more training data for low frequency relations can be provided.
Our work differs from that of (Pitler et al., 2009) in that our system performs classification at the more fine-grained Level 2 types, instead of the coarse-grained Level 1 classes. Their system applies a Naïve Bayes classifier whereas our system uses a maximum entropy classifier, and the sets of features used are also different. In addition, the data set of (Pitler et al., 2009) includes EntRel and AltLex, which are relations in which an implicit connective cannot be inserted between adjacent sentences, whereas ours excludes EntRel and AltLex.
Discussion: Why are implicit discourse relations difficult to recognize?
In the above experiments, we have shown that by using the four feature classes, we are able to increase the classification accuracy from 26.1% of the majority baseline to 40.2%. Although we feel a 14.1 absolute percentage improvement is a solid result, an accuracy of 40% does not allow downstream NLP applications to trust the output of such a classification system. To understand the difficulties of the task more deeply, we analyzed individual training and validation data pairs, from which we were able to generalize four challenges to automated implicit discourse relation recognition. We hope that this discussion may motivate future work on implicit discourse relation recognition.
Ambiguity. There is ambiguity among the relations. For example, we notice that a lot of Contrast relations are mistakenly classified as Conjunction. When we analyzed these relations, we observed that Contrast and Conjunction in the PDTB annotation are very similar to each other in terms of words, syntax, and semantics, as Examples 5 and 6 show. In both examples, the same antonymous verb pair is used (fell and rose), different subjects are mentioned in Arg1 and Arg2 (net and revenue in the first example, and net and sales in the second), and these subjects are all compared to like items from the previous year. Moreover, the implicit discourse connective given by the annotators is while in both cases, which is an ambiguous connective as shown in (Miltsakaki et al., 2005). Relation ambiguity may be ameliorated if an instance is analyzed in context. However, according to the PDTB annotation guidelines, if the annotators could not disambiguate between two relation types, or if they felt both equally reflect their understanding of the relation between the arguments, they could annotate two types to the relation. In the whole PDTB corpus, about 5.4% of the explicit relations and 2.2% of the implicit relations are annotated with two relation types. Example 7 is such a case where the implicit connective meanwhile may be interpreted as expressing a Conjunction or Contrast relation.

(7) Arg1: Sales surged 40% to 250.17 billion yen from 178.61 billion. Arg2: [meanwhile] Net income rose 11% to 29.62 billion yen from 26.68 billion. (Conjunction; Contrast -wsj 2242)

Inference. Sometimes inference and a knowledge base are required to resolve the relation type. In Example 8, to understand that Arg2 is a restatement of Arg1, we need a semantic mechanism to show that either the semantics of Arg1 infers that of Arg2 or the other way around.

(8) Arg1: "I had calls all night long from the States," he said. Arg2: "[in fact] I was woken up every hour -1:30, 2:30, 3:30, 4:30." (Restatement -wsj 2205)

In this example, I had calls all night long semantically infers I was woken up every hour, as shown in: receive call(I) ∧ duration(all night) ⇒ woken up(I) ∧ duration(every hour). In fact, most relation types can be represented using formal semantics (PDTB-Group, 2007), as shown in Table 7, where |Arg1| and |Arg2| represent the semantics extracted from Arg1 and Arg2, respectively. This kind of formal semantic reasoning requires a robust knowledge base, which is still beyond our current technology.

Context. PDTB annotators adopted the Minimality Principle in argument selection, according to which they only included in the argument the minimal span of text that is sufficient for the interpretation of the relation. While the context is not necessary to interpret the relation, it is usually necessary to understand the meaning of the arguments. Without an analysis of the context, Arg1 and Arg2 may seem unconnected, as the following example shows, where the meaning of Arg1 is mostly derived from its previous context (i.e., West German ... technical reactions).
Table 7: Some examples of relation types with their semantic representations, as taken from (PDTB-Group, 2007).
Cause: |Arg1| ≺ |Arg2| ∨ |Arg2| ≺ |Arg1|
Concession: A ≺ C ∧ B ⇒ ¬C, where A ∈ |Arg1|, B ∈ |Arg2|
Instantiation: exemplify(|Arg2|, λx.x ∈ E), where E = extract(|Arg1|)
Restatement: |Arg1| ⇒ |Arg2| ∨ |Arg1| ⇐ |Arg2|
Alternative: |Arg1| ∧ |Arg2| ∨ |Arg1| ⊕ |Arg2|
(9) Prev. Context: West German Economics Minister Helmut Haussmann said, "In my view, the stock market will stabilize relatively quickly. There may be one or other psychological or technical reactions, Arg1: but they aren't based on fundamentals. Arg2: [in short] The economy of West Germany and the EC European Community is highly stable." (Conjunction -wsj 2210)
Sometimes the range of the context may easily extend to the whole text, which would require a system to possess a robust context modeling mechanism. In Example 10, in order to realize the causal relation between Arg2 and Arg1, we possibly need to read the whole article and understand what was happening: the machinist union was having a strike and the strike prevented most of its union members from working.
(10) Arg1: And at the company's Wichita, Kan., plant, about 2,400 of the 11,700 machinists still are working, Boeing said. Arg2: [because] Under Kansas right-to-work laws, contracts cannot require workers to be union members.
(Cause -wsj 2208)
World Knowledge. Sometimes even context modeling is not enough. We may also need world knowledge to understand the arguments and hence to interpret the relation. In the following example, from the previous sentence of Arg1, it is reported that "the Senate voted to send a delegation of congressional staffers to Poland to assist its legislature", and this delegation is viewed as a "gift" in Arg1. It is suggested in Arg2 that the Poles might view the delegation as a "Trojan Horse". Here we need world knowledge to understand that "Trojan Horse" is usually applied as a metaphor for a person or thing that appears innocent but has harmful intent, and hence understand that Arg2 poses a contrasting view of the delegation as Arg1 does.
(11) Arg1: Senator Pete Domenici calls this effort "the first gift of democracy". Arg2:
[but] The Poles might do better to view it as a Trojan Horse.
(Contrast -wsj 2237)
These four classes of difficulties -ambiguity between relations, inference, contextual modeling, and world knowledge -show that implicit discourse relation classification needs deeper semantic representations, more robust system design, and access to more external knowledge. These obstacles may not be restricted to recognizing implicit relations, but are also applicable to other related discourse-centric tasks.
Conclusion
We implemented an implicit discourse relation classifier and showed initial results on the recently released Penn Discourse Treebank. The features we used include the modeling of the context of relations, features extracted from constituent parse trees and dependency parse trees, and word pair features. Our classifier achieves an accuracy of 40.2%, a 14.1% absolute improvement over the baseline. We also conducted a data analysis and discussed four challenges that need to be addressed in future to overcome the difficulties of implicit relation classification in the PDTB.
Figure 1: Two types of discourse dependency structures. Top: fully embedded argument, bottom: shared argument.
Figure 3: A gold standard subtree for Arg1 of an implicit discourse relation from wsj 2224.
Figure 4: A dependency subtree for Arg1 of an implicit discourse relation from wsj 2224.
| Level 1 Class | Level 2 Type | Training instances | % | Adjusted % |
|---|---|---|---|---|
| Temporal | Asynchronous | 583 | 4.36 | 4.36 |
| Temporal | Synchrony | 213 | 1.59 | 1.59 |
| Contingency | Cause | 3426 | 25.61 | 25.63 |
| Contingency | Pragmatic Cause | 69 | 0.52 | 0.52 |
| Contingency | Condition | 1 | 0.01 | - |
| Contingency | Pragmatic Condition | 1 | 0.01 | - |
| Comparison | Contrast | 1656 | 12.38 | 12.39 |
| Comparison | Pragmatic Contrast | 4 | 0.03 | - |
| Comparison | Concession | 196 | 1.47 | 1.47 |
| Comparison | Pragmatic Concession | 1 | 0.01 | - |
| Expansion | Conjunction | 2974 | 22.24 | 22.25 |
| Expansion | Instantiation | 1176 | 8.79 | 8.80 |
| Expansion | Restatement | 2570 | 19.21 | 19.23 |
| Expansion | Alternative | 158 | 1.18 | 1.18 |
| Expansion | Exception | 2 | 0.01 | - |
| Expansion | List | 345 | 2.58 | 2.58 |
| | Total | 13375 | | |
| | Adjusted total | 13366 | | |

Table 1: Distribution of Level 2 relation types of implicit relations from the training sections (Sec. 2 -21). The last two columns show the initial distribution and the distribution after removing the five types that have only a few training instances.
Table 3: Classification accuracy with all features from each feature class. Rows 1 to 4: individual feature class; Row 5: all feature classes.
Table 4: Classification accuracy with top rules/word pairs for each feature class. Rows 1 to 4: individual feature class; Row 5: all feature classes.
Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of the Fifth International Conference on Language Resources and Evaluation (LREC 2006), pages 449-454.
Jerry R. Hobbs. 1990. Literature and cognition. In CSLI Lecture Notes Number 21. CSLI Publications.
Zheng Ping Jiang and Hwee Tou Ng. 2006. Semantic role labeling of NomBank: A maximum entropy approach. In Proceedings of the 2006 Conference on Empirical Methods in Natural Language Processing (EMNLP 2006), pages 138-145, Sydney, Australia.
Alistair Knott and Ted Sanders. 1998. The classification of coherence relations and their linguistic markers: An exploration of two languages. Journal of Pragmatics, 30(2):135-175.
Alex Lascarides and Nicholas Asher. 1993. Temporal interpretation, discourse relations and commonsense entailment. Linguistics and Philosophy, 16(5):437-493.
Alan Lee, Rashmi Prasad, Aravind Joshi, Nikhil Dinesh, and Bonnie Webber. 2006. Complexity of dependencies in discourse: Are dependencies in discourse more complex than in syntax? In Proceedings of the 5th International Workshop on Treebanks and Linguistic Theories, Prague, Czech Republic, December.
William C. Mann and Sandra A. Thompson. 1988. Rhetorical Structure Theory: Toward a functional theory of text organization. Text, 8(3):243-281.
Daniel Marcu and Abdessamad Echihabi. 2002. An unsupervised approach to recognizing discourse relations. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics (ACL 2002), pages 368-375, Morristown, NJ, USA.
Daniel Marcu. 1998. A surface-based approach to identifying discourse markers and elementary textual units in unrestricted texts. In Proceedings of the COLING-ACL 1998 Workshop on Discourse Relations and Discourse Markers, pages 1-7, Montreal, Canada, August.
Eleni Miltsakaki, Nikhil Dinesh, Rashmi Prasad, Aravind Joshi, and Bonnie Webber. 2005. Experiments on sense annotations and sense disambiguation of discourse connectives. In Proceedings of the Fourth Workshop on Treebanks and Linguistic Theories (TLT2005), Barcelona, Spain, December.
Alessandro Moschitti. 2004. A study on convolution kernels for shallow semantic parsing. In Proceedings of the 42nd Annual Meeting of the Association for Computational Linguistics (ACL 2004), Barcelona, Spain.
PDTB-Group. 2007. The Penn Discourse Treebank 2.0 Annotation Manual. The PDTB Research Group, December.
Emily Pitler, Mridhula Raghupathy, Hena Mehta, Ani Nenkova, Alan Lee, and Aravind Joshi. 2008. Easily identifiable discourse relations. In Proceedings of the 22nd International Conference on Computational Linguistics (COLING 2008), Manchester, UK, August.
Emily Pitler, Annie Louis, and Ani Nenkova. 2009. Automatic sense prediction for implicit discourse relations in text. To appear in Proceedings of the Joint Conference of the 47th Annual Meeting of the Association for Computational Linguistics and the 4th International Joint Conference on Natural Language Processing of the Asian Federation of Natural Language Processing (ACL-IJCNLP 2009).
Rashmi Prasad, Nikhil Dinesh, Alan Lee, Eleni Miltsakaki, Livio Robaldo, Aravind Joshi, and Bonnie Webber. 2008. The Penn Discourse Treebank 2.0. In Proceedings of the 6th International Conference on Language Resources and Evaluation (LREC 2008).
Manami Saito, Kazuhide Yamamoto, and Satoshi Sekine. 2006. Using phrasal patterns to identify discourse relations. In Proceedings of the Human Language Technology Conference of the North American Chapter of the Association for Computational Linguistics (HLT-NAACL 2006), pages 133-136, New York, USA, June.
Bonnie Webber. 2004. D-LTAG: Extending lexicalized TAG to discourse. Cognitive Science, 28(5):751-779, September.
Ben Wellner, James Pustejovsky, Catherine Havasi, Anna Rumshisky, and Roser Sauri. 2006. Classification of discourse coherence relations: An exploratory study using multiple knowledge sources. In Proceedings of the 7th SIGdial Workshop on Discourse and Dialogue, Sydney, Australia, July.
Florian Wolf and Edward Gibson. 2005. Representing discourse coherence: a corpus-based analysis. In Proceedings of the 20th International Conference on Computational Linguistics (COLING 2004), pages 134-140, Morristown, NJ, USA. |
44,127,792 | Linguistic Cues to Deception and Perceived Deception in Interview Dialogues | We explore deception detection in interview dialogues. We analyze a set of linguistic features in both truthful and deceptive responses to interview questions. We also study the perception of deception, identifying characteristics of statements that are perceived as truthful or deceptive by interviewers. Our analysis shows significant differences between truthful and deceptive question responses, as well as variations in deception patterns across gender and native language. This analysis motivated our selection of features for machine learning experiments aimed at classifying globally deceptive speech. Our best classification performance is 72.74 F1-Score (about 27% better than human performance), which is achieved using a combination of linguistic features and individual traits. | [
7842466,
5805445,
1947247,
15843994,
10186140
] | Linguistic Cues to Deception and Perceived Deception in Interview Dialogues
Association for Computational Linguistics. Copyright Association for Computational Linguistics. June 1-6, 2018.
Sarah Ita Levitan sarahita@cs
Department of Computer Science
Columbia University New York
NY, USA
Angel Maredia
Department of Computer Science
Columbia University New York
NY, USA
Julia Hirschberg
Department of Computer Science
Columbia University New York
NY, USA
Linguistic Cues to Deception and Perceived Deception in Interview Dialogues
Proceedings of NAACL-HLT 2018
NAACL-HLT 2018, New Orleans, Louisiana. Association for Computational Linguistics, June 1-6, 2018.
We explore deception detection in interview dialogues. We analyze a set of linguistic features in both truthful and deceptive responses to interview questions. We also study the perception of deception, identifying characteristics of statements that are perceived as truthful or deceptive by interviewers. Our analysis shows significant differences between truthful and deceptive question responses, as well as variations in deception patterns across gender and native language. This analysis motivated our selection of features for machine learning experiments aimed at classifying globally deceptive speech. Our best classification performance is 72.74 F1-Score (about 27% better than human performance), which is achieved using a combination of linguistic features and individual traits.
Introduction
Deception detection is a critical problem studied by psychologists, criminologists, and computer scientists. In recent years the NLP and speech communities have increased their interest in deception detection. Language cues are inexpensive and easy to collect, and research examining text-based and speech-based cues to deception has been quite promising. Prior work has examined deceptive language in several domains, including fake reviews, mock crime scenes, and opinions about topics such as abortion or the death penalty. In this work we explore the domain of interview dialogues, which are similar to many real-world deception conditions. Previous work has presented the results of classification experiments using linguistic features, attempting to identify which features contribute most to classification accuracy. However, studies often do not include an empirical analysis of features. We might know that a particular feature set (e.g. LIWC categories) is useful for deception classification, but we lack insight about the nature of the deceptive and truthful language that makes the feature set useful, and whether the differences in language use are statistically significant. In this work we conduct an empirical analysis of feature sets and report on the different characteristics of truthful and deceptive language. In addition, previous work has focused on the characteristics of deceptive language, and not on the characteristics of perceived deceptive language. We are also interested in human perception of deception; that is, what are the characteristics of language that listeners perceive as truthful or deceptive? We examine a unique dataset that includes information about both the deceiver and the interviewer, along with interviewer judgments of deception. Along with an analysis of deceptive and truthful speech, we analyze the believed and disbelieved speech, according to reported interviewer judgments. Finally, previous work has focused on general inferences about deception; here we include analysis of gender and native language, to study their effect on deceptive behavior, and also their effect on perception of deception. This work contributes to the critical problem of automatic deception detection, and increases our scientific understanding of deception, deception perception, and speaker differences in deceptive behavior.
The paper is organized as follows: In Section 2 we review related work in language-based cues to deception. Section 3 describes the dataset used for this work, and Section 4 details the different feature sets we employ. In Section 5, we report on the results of our empirical study of indicators of deception and perceived deception, as well as gender and native language differences. Section 6 presents our machine learning classification results using the deception indicator feature sets. We conclude in Section 7 with a discussion and ideas
Related Work
Language-based cues to deception have been analyzed in many genres. Ott et al. (2011) compared approaches to automatically detecting deceptive opinion spam, using a crowdsourced dataset of fake hotel reviews. Several studies use a fake opinion paradigm for collecting data, instructing subjects to write or record deceptive and truthful opinions about controversial topics such as the death penalty or abortion, or about a person that they like/dislike (Newman et al., 2003;Mihalcea and Strapparava, 2009). Other research has focused on real-world data obtained from court testimonies and depositions (Fornaciari and Poesio, 2013;Bachenko et al., 2008;. Real-world deceptive situations are highstakes, where there is much to be gained or lost if deception succeeds or fails; it is hypothesized that these conditions are more likely to elicit strong cues to deception. However, working with such data requires extensive research to annotate each utterance for veracity, so such datasets are often quite small and not always reliable.
Linguistic features such as n-grams and language complexity have been analyzed as cues to deception Yancheva and Rudzicz, 2013). Syntactic features such as part of speech tags have also been found to be useful for structured data (Ott et al., 2011;Feng et al., 2012). Statement Analysis (Adams, 1996) is a text-based deception detection approach that combines lexical and syntactic features. An especially useful resource for text-based deception detection is the Linguistic Inquiry and Word Count (LIWC) (Pennebaker and King, 1999), which groups words into psychologically motivated categories. In addition to lexical features, some studies have examined acousticprosodic cues to deception (Rockwell et al., 1997;Enos, 2009;Mendels et al., 2017). (Benus et al., 2006) studied pause behavior in deceptive speech. This work is very promising, but it is more difficult to obtain large, cleanly recorded speech corpora with deception annotations than to obtain text corpora. An excellent meta-study of verbal cues to deception can be found in (DePaulo et al., 2003).
Data
Corpus
For this work, we examined the Columbia X-Cultural Deception (CXD) Corpus (Levitan et al., 2015a) a collection of within-subject deceptive and non-deceptive speech from native speakers of Standard American English (SAE) and Mandarin Chinese (MC), all speaking in English. The corpus contains dialogues between 340 subjects. A variation of a fake resume paradigm was used to collect the data. Previously unacquainted pairs of subjects played a "lying game" with each other. Each subject filled out a 24-item biographical questionnaire and were instructed to create false answers for a random half of the questions. They also reported demographic information including gender and native language, and completed the NEO-FFI personality inventory (Costa and McCrae, 1989).
The lying game was recorded in a sound booth. For the first half of the game, one subject assumed the role of the interviewer, while the other answered the biographical questions, lying for half and telling the truth for the other; questions chosen in each category were balanced across the corpus. For the second half of the game, the subjects' roles were reversed, and the interviewer became the interviewee. During the game, the interviewer was allowed to ask the 24 questions in any order s/he chose; the interviewer was also encouraged to ask follow-up questions to aid them in determining the truth of the interviewee's answers. Interviewers recorded their judgments for each of the 24 questions, providing information about human perception of deception. The entire corpus was orthographically transcribed using the Amazon Mechanical Turk (AMT) crowd-sourcing platform, and the speech was segmented into inter-pausal units (IPUs), defined as pause-free segments of speech separated by a minimum pause length of 50 ms. The speech was also segmented into turn units, where a turn is defined as a maximal sequence of IPUs from a single speaker without any interlocutor speech that is not a backchannel. There are two forms of deception annotations in the corpus: local and global. Interviewees labeled their responses with local annotations by pressing a "T" or "F" key for each utterance as they spoke. These keypresses were automatically aligned with speaker IPUs and turns. Global labels were provided by the biographical questionnaire, where each of the 24 questions was labeled as truthful or deceptive.
Consider the following dialogue: Interviewer: What is your mother's job? Interviewee: My mother is a doctor (F). She has always worked very late hours and I felt neglected as a child (T).
Is the interviewee response true or false? We differentiate between global and local deception. Globally, the response to the question is deceptive. However, it contains local instances of both truth and deception. In this work we focus on dialogue-based deception, using global deception labels.
Global Segmentation
Previous work with the CXD corpus has focused on IPU-level and turn-level analysis and classification of local deception, mostly with acoustic-prosodic features (Levitan et al., 2015b; Mendels et al., 2017). Here we are interested in exploring global deception at the dialogue level for the first time in this corpus. We define response-segments as sets of turns that are related to a single question (of the 24 interview questions). In order to annotate these segments, we first used a question detection and identification system (Maredia et al., 2017) that uses word embeddings to match semantically similar variations of questions to a target question list. This was necessary because interviewers asked the 24 questions using different wording from the original list of questions. On this corpus, Maredia et al. (2017) obtained an F1-score of .95.
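The matching step is described here only at a high level. The following sketch is a generic embedding-similarity matcher (averaged word vectors, cosine similarity, an arbitrary threshold) and should not be read as the actual Maredia et al. (2017) system.

```python
# Hedged sketch: match a paraphrased interviewer question to the closest of the
# 24 target questions by cosine similarity of averaged word embeddings.
import numpy as np

def embed(text, word_vectors):
    """Average vectors of in-vocabulary tokens; zero vector if none are known."""
    vecs = [word_vectors[w] for w in text.lower().split() if w in word_vectors]
    dim = next(iter(word_vectors.values())).shape
    return np.mean(vecs, axis=0) if vecs else np.zeros(dim)

def match_question(utterance, target_questions, word_vectors, threshold=0.7):
    u = embed(utterance, word_vectors)
    best, best_sim = None, threshold
    for q in target_questions:
        v = embed(q, word_vectors)
        denom = np.linalg.norm(u) * np.linalg.norm(v)
        sim = float(u @ v / denom) if denom else 0.0
        if sim > best_sim:
            best, best_sim = q, sim
    return best  # None if no target question is similar enough
```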
After tagging interviewer turns with this system, we labeled the set of interviewee turns between two interviewer questions q1 and q2 as corresponding to question q1. The intuition behind this was that those turns were responses to follow up questions related to q1, and while the question detection and identification system discussed above did not identify follow up questions, we found that most of the follow up questions after an interviewer question q1 would be related to q1 in our hand annotation. We evaluated this global segmentation on a hand-annotated test set of 17 interviews (about 10% of the corpus) consisting of 2,671 interviewee turns, 408 interviewer questions, and 977 follow up questions. Our global segmentation approach resulted in 77.8% accuracy on our hand-labeled test set (errors were mostly due to turns that were unrelated to any question).
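The segment-labeling rule itself is simple. A hedged sketch follows, assuming turns carry a speaker role and, for interviewer turns, the matched target question (the dict layout is illustrative):

```python
# Sketch: assign every interviewee turn between tagged questions q1 and q2 to q1.
def label_response_segments(turns):
    """turns: dicts like {'speaker': 'interviewer'/'interviewee',
       'question': <matched target question or None>, 'text': ...}."""
    segments = {}          # target question -> list of interviewee turns
    current = None
    for turn in turns:
        if turn['speaker'] == 'interviewer' and turn.get('question'):
            current = turn['question']          # a new question opens a new segment
            segments.setdefault(current, [])
        elif turn['speaker'] == 'interviewee' and current:
            segments[current].append(turn)      # follow-up answers stay with q1
    return segments
```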
We performed our analysis and classification on two segmentations of the data using this tagging method: (1) first turn: we analyzed only the single interviewee turn directly following the original question, and (2) multiple turns: we analyzed the entire segment of interviewee turns that were responding to the original interviewer question and subsequent follow-up questions. In our classification experiments, we explore whether a deceptive answer is better classified by the interviewee's initial response or by all of the follow-up conversation between interviewer and interviewee.
Features
LIWC Previous work has found that deceivers tend to use different word usage patterns when they are lying (Newman et al., 2003). We used LIWC (Pennebaker et al., 2001) to extract semantic features from each utterance. LIWC is a text analysis program that computes features consisting of normalized word counts for 93 semantic classes. LIWC dimensions have been used in many studies to predict outcomes including personality (Pennebaker and King, 1999), deception (Newman et al., 2003), and health (Pennebaker et al., 1997). We extracted a total of 93 features using LIWC 2015, including standard linguistic dimensions (e.g. percentage of words that are pronouns, articles), markers of psychological processes (e.g. affect, social, cognitive), punctuation categories (e.g. periods, commas), and formality measures (e.g. fillers, swear words).
Linguistic We extracted 23 linguistic features which we adopted from previous deception studies such as Enos (2009) and Bachenko et al. (2008). Included in this list are binary and numeric features capturing hedge words, filled pauses, laughter, complexity, contractions, and denials. We include Dictionary of Affect Language (DAL) (Whissell et al., 1986) scores that measure the emotional meaning of texts, and a specificity score which measures level of detail (Li and Nenkova, 2015). The full list of features is:
'hasAbsolutelyReally', 'hasContraction', 'hasI', 'hasWe', 'hasYes', 'hasNAposT' (turns that contain words with the contraction "n't"), 'hasNo', 'hasNot', 'isJustYes', 'isJustNo', 'noYe-sOrNo', 'specificDenial', 'thirdPersonPronouns', 'hasFalseStart', 'hasFilledPause', 'numFilled-Pauses', 'hasCuePhrase', 'numCuePhrases', 'hasHedgePhrase', 'numHedgePhrases', 'hasLaugh', 'complexity', 'numLaugh', 'DALwc', 'DAL-pleasant', 'DAL-activate', 'DALimagery', 'specScores' (specificity score). Response Length Previous work has found that response length, in seconds, is shorter in deceptive speech, and that the difference in number of words in a segment of speech is insignificant between deceptive and truthful speech (DePaulo et al., 2003). For our question-level analysis, we used four different measures for response length: the total number of seconds of an interviewee responsesegment, the total number of words in an interviewee response-segment, the average response time of a turn in an interviewee response-segment, and the average number of words per turn in an interviewee response-segment. Individual Traits We analyzed gender and native language of the speakers to determine if these traits were related to ability to deceive and to detect deception. We also analyzed linguistic cues to deception across gender and native language, and used gender and native language information in our classification experiments. All speakers were either male or female, and their native language was either Standard American English or Mandarin Chinese. In addition, we used the NEO-FFI (5 factor) personality inventory scores as features in classification experiments, but not for the statistical analysis in this paper. Follow-up Questions Follow-up questions are questions that an interviewer asks after they ask a question from the original prescribed set of questions. We hypothesized that if an interviewer asked more follow-up questions, they were more likely to identify deceptive responses, because asking follow-up questions indicated interviewer doubt of the interviewee's truthfulness. For each interviewee response-segment, we counted the number of follow-up questions interviewees were asked by the interviewer.
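A few of these binary and numeric features are simple to compute. The sketch below uses illustrative cue-word lists rather than the authors' exact lexicons, so the outputs are only an approximation of the features named above.

```python
# Sketch of a handful of the linguistic features; word lists are examples only.
import re

HEDGES = {'maybe', 'probably', 'sort of', 'kind of', 'i guess', 'i think'}
FILLED_PAUSES = {'um', 'uh', 'er', 'hm'}

def linguistic_features(text):
    t = text.lower()
    tokens = re.findall(r"[a-z']+", t)
    return {
        'hasI': 'i' in tokens,
        'hasNot': 'not' in tokens,
        'hasNAposT': any(w.endswith("n't") for w in tokens),
        'hasFilledPause': any(tok in FILLED_PAUSES for tok in tokens),
        'numFilledPauses': sum(tok in FILLED_PAUSES for tok in tokens),
        'numHedgePhrases': sum(t.count(h) for h in HEDGES),
        'isJustYes': tokens == ['yes'],
        'isJustNo': tokens == ['no'],
    }
```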
Analysis
In order to analyze the differences between deceptive and truthful speech, we extracted the above features from each question response-segment, and calculated a series of paired t-tests between the features of truthful speech and deceptive speech. All tests for significance correct for family-wise Type I error by controlling the false discovery rate (FDR) at α = 0.05. The k-th smallest p-value is considered significant if it is less than k · α / n. Table 1 shows the features that were statistically significant indicators of truth and deception in interviewee response-segments consisting of multiple turns. Below, we highlight some interesting findings.
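This threshold rule corresponds to the Benjamini-Hochberg step-up procedure. A small sketch of the full step-up rule (the largest k passing the threshold determines all rejections) is given below; it is illustrative and not the authors' analysis code.

```python
# Sketch: Benjamini-Hochberg FDR control at level alpha over n p-values.
import numpy as np

def fdr_significant(p_values, alpha=0.05):
    p = np.asarray(p_values, dtype=float)
    n = len(p)
    order = np.argsort(p)
    thresholds = (np.arange(1, n + 1) * alpha) / n   # k * alpha / n for k = 1..n
    passed = p[order] <= thresholds
    significant = np.zeros(n, dtype=bool)
    if passed.any():
        k_max = np.max(np.nonzero(passed)[0])        # largest k passing the test
        significant[order[:k_max + 1]] = True        # reject the k_max smallest p-values
    return significant
```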
Interviewee Responses
In contrast to (DePaulo et al., 2003), we found that the total duration of an interviewee responsesegment was longer for deceptive speech than for truthful speech. Additionally, while (DePaulo et al., 2003) showed that the number of words in a segment of speech was not significantly different between deceptive and truthful speech, we found that deceptive response-segments had more words than truthful response-segments. Furthermore, we found that longer average response time per turn and more words per sentence were significant indicators of deception. These results show that when interviewees are trying to deceive, not only is their aggregate response longer in duration and number of words, but their individual responses to each follow-up question are also longer. Consistent with (DePaulo et al., 2003), we found that more filled pauses in an interviewee responsesegment was a significant indicator of deception. Deceivers are hypothesized to experience an increase in cognitive load (Vrij et al., 1996), and this can result in difficulties in speech planning, which can be signaled by filled pauses. Although (Benus et al., 2006) found that, in general, the use of pauses correlates more with truthful than with deceptive speech, we found that filled pauses such as "um" were correlated with deceptive speech. The LIWC cogproc (cognitive processes) dimension, which includes words such as "cause", "know", "ought" was significantly more frequent in truthful speech, also supporting the theory that cognitive load is increased while practicing deception.
We found that increased DALimagery scores, which compute words often used in speech to create vivid descriptions, were indicators of deception. We also found that the LIWC language summary variables of authenticity and adjectives were indicators of deception: in an effort to sound more truthful and authentic, interviewees may have provided a level of detail that is uncharacteristic of truthful speech. Similarly, the specif icity metric was indicative of deception: deceptive responses contained more detailed language. Words in the LIWC clout category -a category describing words that indicate power of influence -were more prevalent in deceptive responses, suggesting that subjects sounded more confident while lying. Interrogatives were an indicator of deception. In the context of the interviewerinterviewee paradigm, these are interviewee questions to the interviewer. Perhaps this was a technique used to stall so that they had more time to develop an answer (e.g. "Can you repeat the question?"), or to deflect the interviewer's attention from their deception and put the interviewer on the spot. We observed that hedge words and phrases, which speakers use to distance themselves from a proposition, were more frequent in deceptive speech. This is consistent with Statement Analysis (Adams, 1996), which posits that hedge words are used in deceptive statements to intentionally create vagueness that obscures facts. Consistent with this finding, certainty in language (words such as "always" or "never") was a strong indicator of truthfulness.
It is also interesting to note the features that were not significant indicators of truth or deception. For example, there was no significant difference in laughter frequency or apostrophes (used for contractions in this corpus) between truthful and deceptive responses.
When we compared indicators of truth vs. deception across multiple turns to indicators of truth vs. deception in just the first turns of interviewee response-segments, we found that, generally, indicators in first turns are a subset of indicators across multiple turns. In some cases there were interesting differences. For example, although tone (emotional tone -higher numbers indicate more positive, and lower indicate negative) was not a significant indicator of deception for the entire interviewee response-segment, negative tone was a moderate indicator of deception in first turns. This suggests that the tone of interviewees, when they have just started their lie, is different from when they are given the opportunity to expand on that lie. The findings from our analysis of first turns suggest that there might be enough information in the first response alone to distinguish between deceptive and truthful speech; we test this in our classification experiments in Section 6.
Interviewer Judgments of Deception
In addition to analyzing the linguistic differences between truthful and deceptive speech, we were interested in studying the characteristics of speech that is believed or disbelieved. Since the CXD corpus includes interviewer judgments of deception for each question asked, we have the unique opportunity to study human perception of deception on a large scale. Table 2 shows the features that were statistically significant indicators of truth and deception in interviewee responses -consisting of multiple turns -that were perceived as true or false by interviewers. Here we highlight some interesting findings. There were many features that were prevalent in speech that interviewers perceived as deceptive, which were in fact cues to deception. For example, speech containing more words in a response-segment and more words per sentence was generally perceived as deceptive by interviewers, and indeed, this perception was correct. Disbelieved answers had a greater frequency of filled pauses and hedge words, and greater specificity, all of which were increased in deceptive speech.
There were also several features that were indicators of deception, but were not found in higher rates in statements that were perceived as false. For example, the LIWC dimensions clout and certain were not significantly different in believed vs. disbelieved interviewee responses, but clout was increased in deceptive speech and certain language was increased in truthful speech. There were also features that were significantly different between believed and disbelieved statements, but were not indicators of deception. For example, statements that were perceived as false by interviewers had a greater proportion of specificDenials (e.g. "I did not") than those that were perceived as true; this was not a valid cue to deception. Number of turns was increased in dialogue segments where the interviewer did not ultimately believe the interviewee response. That is, more follow-up questions were asked when an interviewer did not believe their interlocutor's response, which is an intuitive behavior. When we compared indicators of speech that was perceived as deceptive across multiple turns to indicators of speech that was perceived as deceptive in just the first turns, we found that, generally, indicators in first turns are a subset of indicators across multiple turns.
On average, human accuracy at judging truth and deception in the CXD corpus was 56.75%, and accuracy at judging deceptive statements only was 47.93%. The average F1-score for humans was 46. Thus, although some cues were correctly perceived by interviewers, humans were generally poor at deception perception. Nonetheless, characterizing the nature of speech that is believed or not believed is useful for applications where we would ultimately like to synthesize speech that is trustworthy.
Gender and Native Language Differences in Deception Behavior
Having discovered many differences between deceptive and truthful language across all speakers, we were interested in analyzing differences in deceptive language across groups of speakers. Using gender and native language (English or Mandarin Chinese) as group traits, we conducted two types of analysis. First, we directly compared deception performance measures (ability to deceive as interviewee, and ability to detect deception as interviewer) between speakers with different traits, to assess the effect of individual characteristics on deception abilities. In addition, we compared the features of deceptive and truthful language in subsets of the corpus, considering only people with a particular trait, in order to determine group-specific patterns of deceptive language. As before, tests for significance correct for family-wise Type I error by controlling the false discovery rate (FDR) at α = 0.05: the k-th smallest p-value is considered significant if it is less than kα/n.
Gender
There were no significant differences in deception ability between male and female speakers. However, there were many differences in language between male and female speakers. Further, some features were only discriminative between deception and truth for a specific gender. Table 3 shows linguistic features that were significantly different between truthful and deceptive speech, but only for one gender. In some cases the feature was found in different proportions in males and females, and in other cases there was no significant difference. For example, family words were indicative of deception only in female speakers, and these words were also used more frequently by female speakers than male speakers.
The LIWC category of compare was also indicative of deception for females only, and this feature was generally found more frequently in female speech. Article usage was only significantly different between truthful and deceptive speech in females (more articles were found in deceptive speech), but articles were used more frequently in male speech. On the other hand, the LIWC category of posemo (positive emotion) was increased in truthful speech for male speakers only, and there was no significant difference of posemo frequency across gender.
Native Language
Interviewees were more successful at deceiving native Chinese speakers than at deceiving native English speakers (t(170) = −2.13, p = 0.033). This was true regardless of interviewee gender and native language, and slightly stronger for female interviewers (t(170) = −2.22, p = 0.027). When considering only female interviewers, interviewees were more successful at deceiving nonnative speakers than native speakers, but this difference was not significant when considering only male interviewers. As with gender, there were several features that were discriminative between deception and truth for only native speakers of English, or only native speakers of Mandarin. Table 3 shows LIWC categories and their relation to deception, broken down by native language. For example, power words were found more frequently in deception statements, when considering native English speakers only. In general, power words were used more by native Mandarin speakers than by native English speakers. LIWC categories of compare, relative, and swear were more prevalent in deceptive speech, only for English speakers. On the other hand, f eel and perception dimensions were only indicators of deception for native Mandarin speakers, although there was no significant difference in the use of these word categories across native language. Inf ormal and netspeak word dimensions tended to be more frequent in truthful speech for native Chinese speakers only (approaching significance), and these word categories were generally more frequent in native Mandarin speech. Finally, f iller words tended to be more frequent in deceptive speech (approaching significance) only for native Mandarin speakers, and these were used more frequently by native Mandarin speakers than native English speakers.
Overall, our findings suggest that deceptive behavior in general, and deceptive language in particular, are affected by a person's individual characteristics, including gender and native language. When building a deception classification system, it is important to account for this variation across speaker groups.
Deception Classification
Motivated by our analysis showing many significant differences in the language of truthful and deceptive responses to interview questions, we trained machine learning classifiers to automatically distinguish between truthful and deceptive text, using the feature sets described in section 4. We compared classification performance for the two segmentation methods described in section 3.2: first turn and multiple turns. This allowed us to explore the role of context in automatic deception detection. When classifying interviewee response-segments, should the immediate response only be used for classification, or is inclusion of surrounding turns helpful? This has implications not only for deception classification, but for practitioners as well. Should human interviewers make use of responses to follow up questions when determining response veracity, or should the initial response receive the most consideration?
We compared the performance of 3 classification algorithms: Random Forest, Logistic Regression, and SVM (sklearn implementation). In total, there were 7,792 question segments for both single turn and multiple turns segmentations. We divided this into 66% train and 33% test, and used the same fixed test set in experiments for both segmentations in order to directly compare results. The random baseline performance is 50, since the dataset is balanced for truthful and deceptive statements. Another baseline is human performance, which is 46.0 F1 in this corpus. The Random Forest classifier was consistently the best performing, and we only report those results due to space constraints. Table 4 displays the classification performance for each feature set individually, as well as feature combinations, for both single turn and multiple turn segmentations. It also shows the human baseline performance, obtained from the interviewers' judgments of deception in the corpus, which were made after asking each question along with related follow-up questions (i.e. multiple turn segmentation).
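The following sketch (illustrative only, with randomly generated stand-in features rather than the real LIWC/lexical vectors, and not the authors' released code) shows how such a Random Forest run over a fixed 66%/33% split can be set up with scikit-learn.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

# X: one row of per-segment features (hypothetical values here);
# y: 1 = deceptive, 0 = truthful. The corpus is balanced, so chance F1 is about 50.
rng = np.random.default_rng(0)
X = rng.normal(size=(7792, 120))
y = rng.integers(0, 2, size=7792)

# Fixed 66%/33% split so single-turn and multiple-turn runs share the same test set.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.33, random_state=42, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=42)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)
print("F1 on held-out segments:", 100 * f1_score(y_test, pred))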
The best performance (72.74 F1-score) was obtained using LIWC features extracted from multiple turns. This is a 22.74% absolute increase over the random baseline of 50, and a 26.74% absolute increase over the human baseline of 46. The performance of classifiers trained on multiple turns was consistently better than those trained on single turns, for all feature sets. For multiple turns, LIWC features were better than the lexical feature set, and combining lexical with LIWC features did not improve over the performance of LIWC features alone. Adding individual traits information was also not beneficial. However, when considering the first turn only, the best results (70.87 F1-score) were obtained using a combination of LIWC+lexical+individual features. Using the first turns segmentation, lexical features were slightly better than LIWC features, and interestingly, adding individual traits helped both feature sets. A combination of LIWC and lexical features was better than each on its own.
These results suggest that contextual information, in the form of follow-up questions, is beneficial for deception classification. It seems that individual traits, including gender, native language, and personality scores, are helpful in deception classification under the condition where contextual information is not available. When the contextual information is available, the additional lexical content is more useful than individual traits.
Conclusions and Future Work
In this paper we presented a study of deceptive language in interview dialogues. Our analysis of linguistic characteristics of deceptive and truthful speech provides insight into the nature of deceptive language. We also analyzed the linguistic characteristics of speech that is perceived as deceptive and truthful, which is important for understanding the nature of trustworthy speech. We explored variation across gender and native language in linguistic cues to deception, highlighting cues that are specific to particular groups of speakers. We built classifiers that use combinations of linguistic features and individual traits to automatically identify deceptive speech. We compared the performance of using cues from the single first turn of an interviewee response-segment with using cues from the full context of multiple interviewee turns, achieving performance as high as 72.74% F1-score (about 27% better than human detection performance). This work contributes to the critical problem of automatic deception detection, and increases our scientific understanding of deception, deception perception, and individual differences in deceptive behavior. In future work, we plan to conduct similar analysis in additional deception corpora in other domains, in order to identify consistent domain-independent deception indicators. In addition, we plan to conduct cross-corpus machine learning experiments, to evaluate the robustness of these and other feature sets in deception detection. We also would like to explore additional feature combinations, such as adding acoustic-prosodic features. Finally, we plan to conduct an empirical analysis of deception behavior across personality types.
Table 1: Statistically significant indicators of truth and deception in interviewee response-segments consisting of multiple turns related to a single question.
[Table body flattened during extraction. Columns: Feature, Deception, Truth, Neutral. Feature groups and entries: Lexical: DAL.imagery, DAL.pleasant, DAL.activate, complexity, DAL.wc, numCuePhrases, numFilledPauses, isJustNo, isJustYes, noYesOrNo, numHedgePhrases, specificDenial, numLaugh, specScores, thirdPersonPronoun; LIWC: adverb, article, authentic, body, negate, apostro, bio, cause, conj, focuspast, interrog, ipron, certain, clout, cogproc, compare, prep, pronoun, WC, WPS, discrep, focusfuture, function, insight, money, motion, negemo, nonflu, number, posemo, ppron, relative; Response length: num words, response length, avg num words, avg response length; Followup: num turns. The assignment of entries to the Deception/Truth/Neutral columns was lost in the flattening.]
Table 2: Statistically significant indicators of perceived truth and deception in interviewer judgments of interviewee responses.
Table 3: Gender-specific and language-specific indicators of deception and truth. We consider a result to approach significance if its uncorrected p value is less than 0.05 and indicate this with () in the table.
Table 4: Random Forest classification of single turn and multiple turn segmentations, using text-based features and individual traits (gender, native language, NEO-FFI personality scores).
https://www.mturk.com/mturk/
A full description of the features is found here: https://s3-us-west-2.amazonaws.com/downloads.liwc.net/LIWC2015_OperatorManual.pdf. A detailed explanation of these linguistic features and how they were computed is found here: http://www.cs.columbia.edu/speech/cxd/features.html
Acknowledgments
This work was partially funded by AFOSR FA9550-11-1-0120 and by NSF DGE-11-44155. Thank you to Bingyan Hu for her assistance with feature extraction. We thank the anonymous reviewers for their helpful comments.
Susan H Adams. 1996. Statement analysis: What do suspects' words really reveal. FBI L. Enforcement Bull. 65:12.
Joan Bachenko, Eileen Fitzpatrick, and Michael Schonwetter. 2008. Verification and implementation of language-based deception indicators in civil and criminal narratives. In Proceedings of the 22nd International Conference on Computational Linguistics-Volume 1. Association for Computational Linguistics, pages 41-48.
Stefan Benus, Frank Enos, Julia Hirschberg, and Elizabeth Shriberg. 2006. Pauses in deceptive speech. In Speech Prosody, volume 18, pages 2-5.
PT Costa and RR McCrae. 1989. NEO Five-Factor Inventory (NEO-FFI). Odessa, FL: Psychological Assessment Resources.
Bella M DePaulo, James J Lindsay, Brian E Malone, Laura Muhlenbruck, Kelly Charlton, and Harris Cooper. 2003. Cues to deception. American Psychological Association, Inc., pages 74-118.
Frank Enos. 2009. Detecting deception in speech. Ph.D. thesis, Citeseer.
Song Feng, Ritwik Banerjee, and Yejin Choi. 2012. Syntactic stylometry for deception detection. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers-Volume 2. Association for Computational Linguistics, pages 171-175.
Tommaso Fornaciari and Massimo Poesio. 2013. Automatic deception detection in Italian court cases. Artificial Intelligence and Law 21(3):303-340.
Sarah I Levitan, Guzhen An, Mandi Wang, Gideon Mendels, Julia Hirschberg, Michelle Levine, and Andrew Rosenberg. 2015a. Cross-cultural production and detection of deception from speech. In Proceedings of the 2015 ACM Workshop on Multimodal Deception Detection. ACM, pages 1-8.
Sarah I Levitan, Guzhen An, Mandi Wang, Gideon Mendels, Julia Hirschberg, Michelle Levine, and Andrew Rosenberg. 2015b. Cross-cultural production and detection of deception from speech. In Proceedings of the 2015 ACM Workshop on Multimodal Deception Detection. ACM, pages 1-8.
Junyi Jessy Li and Ani Nenkova. 2015. Fast and accurate prediction of sentence specificity. In Proceedings of the Twenty-Ninth Conference on Artificial Intelligence (AAAI), pages 2281-2287.
Angel S Maredia, Kara Schechtman, Sarah I Levitan, and Julia Hirschberg. 2017. Comparing approaches for automatic question identification. SEM.
Gideon Mendels, Sarah Ita Levitan, Kai-Zhan Lee, and Julia Hirschberg. 2017. Hybrid acoustic-lexical deep learning approach for deception detection. Proc. Interspeech 2017, pages 1472-1476.
Rada Mihalcea and Carlo Strapparava. 2009. The lie detector: Explorations in the automatic recognition of deceptive language. In Proceedings of the ACL-IJCNLP 2009 Conference Short Papers. Association for Computational Linguistics, pages 309-312.
Matthew L Newman, James W Pennebaker, Diane S Berry, and Jane M Richards. 2003. Lying words: Predicting deception from linguistic styles. Personality and Social Psychology Bulletin 29(5):665-675.
Myle Ott, Yejin Choi, Claire Cardie, and Jeffrey T Hancock. 2011. Finding deceptive opinion spam by any stretch of the imagination. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies-Volume 1. Association for Computational Linguistics, pages 309-319.
James W Pennebaker, Martha E Francis, and Roger J Booth. 2001. Linguistic inquiry and word count: LIWC 2001. Mahway: Lawrence Erlbaum Associates 71:2001.
James W Pennebaker and Laura A King. 1999. Linguistic styles: language use as an individual difference. Journal of Personality and Social Psychology 77(6):1296.
James W Pennebaker, Tracy J Mayne, and Martha E Francis. 1997. Linguistic predictors of adaptive bereavement. Journal of Personality and Social Psychology 72(4):863.
Verónica Pérez-Rosas, Mohamed Abouelenien, Rada Mihalcea, and Mihai Burzo. 2015. Deception detection using real-life trial data. In Proceedings of the 2015 ACM International Conference on Multimodal Interaction. ACM, pages 59-66.
Verónica Pérez-Rosas and Rada Mihalcea. 2015. Experiments in open domain deception detection. In Proceedings of EMNLP 2015. ACL, pages 1120-1125.
Patricia Rockwell, David B Buller, and Judee K Burgoon. 1997. The voice of deceit: Refining and expanding vocal cues to deception. Communication Research Reports 14(4):451-459.
Aldert Vrij, Gun R Semin, and Ray Bull. 1996. Insight into behavior displayed during deception. Human Communication Research 22(4):544-562.
Cynthia Whissell, Michael Fournier, René Pelland, Deborah Weir, and Katherine Makarec. 1986. A dictionary of affect in language: IV. Reliability, validity, and applications. Perceptual and Motor Skills 62(3):875-888.
Maria Yancheva and Frank Rudzicz. 2013. Automatic detection of deception in child-produced speech using syntactic complexity features. In ACL (1), pages 944-953.
14,363,654 | Similarities and Differences among Semantic Behaviors of Japanese Adnominal Constituents | This paper treats the classification of the semantic functions performed by adnominal constituents in Japanese, where many parts of speech act as adnominal constituents. In order to establish a formal treatment of the semantic roles, the similarities and differences among adnominal constituents, i.e. adjectives and "noun + NO (in English "of + noun")" structures, which have a broad range of semantic functions, are discussed. This paper also proposes an objective method of classifying these constructs using a large amount of linguistic data. The feasibility of this was verified with a selforganizing semantic map based on a neural network model. | [
8806715,
3266611,
15792191
] | Similarities and Differences among Semantic Behaviors of Japanese Adnominal Constituents
Kyoko Kanzaki
Communications Research Laboratory
588-2 Iwaoka, Iwaoka-cho, Nishi-ku, Kobe 651-2492, Japan
Qing Ma
Communications Research Laboratory
588-2 Iwaoka, Iwaoka-cho, Nishi-ku, Kobe 651-2492, Japan
Hitoshi Isahara
Communications Research Laboratory
588-2 Iwaoka, Iwaoka-cho, Nishi-ku, Kobe 651-2492, Japan
Similarities and Differences among Semantic Behaviors of Japanese Adnominal Constituents
This paper treats the classification of the semantic functions performed by adnominal constituents in Japanese, where many parts of speech act as adnominal constituents. In order to establish a formal treatment of the semantic roles, the similarities and differences among adnominal constituents, i.e. adjectives and "noun + NO (in English "of + noun")" structures, which have a broad range of semantic functions, are discussed. This paper also proposes an objective method of classifying these constructs using a large amount of linguistic data. The feasibility of this was verified with a selforganizing semantic map based on a neural network model.
Introduction
Pustejovsky (Pustejovsky, 1995) proposed the theory of a generative lexicon as a framework by which meanings of words are expressed in one unified representation. This kind ofgenerativity would be very useful for NLP, especially if it is applicable to the complex semantic structures represented by various modification relations. In our previous research on adjectives (Isahara and Kanzaki, 1999) we used Pustejovsky's theory to classify adjectives in Japanese. In this paper we take the first steps in a similar classification of the Japanese "noun + NO" construction. Bouillon (Bouillon, 1996) applied this theory to the adnominal constituent of mental states. Saint-Dizier (Saint-Dizier, 1998) discussed adjectives in French. Isahara and Kanzaki (Isahara and Kanzaki, 1999) treated a much wider range of phenomena of adnominal constituents. They classified the semantic roles of adnominal constituents in .Japanese. where many parts of speech act as adnominal constituents, and discussed a for-mal treatment of their semantic roles. In their research, adnominal constituents, mainly adjectives which function as adverbials, are discussed. The present paper describes the similarities and differences among adnominal constituents, i.e. adjectives and "noun + NO t (in English "of + noun")" structures which have a broad range of semantic functions. This paper proposes an objective method for classifying these structures using a large amount of linguistic data. The feasibility of this was verified with a self-organizing semantic map based on a neural network model.
In section 2, we explain the semantic functions performed by "noun + NO." In section 3, we discuss how we can semi-automatically obtain and classify examples of adjectives and "noun + NO" structures which have similar semantic functions. In section 4, we introduce a self-organizing semantic map to verify the result of this classification. In section 5, we discuss similarities and differences between adjectives and "noun + NO" structures.
The Diversity of Semantic
Relations between "noun -t-NO" and their Head Nouns Among Japanese adnominal constituents, " noun + NO" represents a wider range of semantic relations than other adnominal constituents. Therefore, "noun + NO" does not always behave like the other adnominal constituents. In previous work, some researchers have analyzed semantic relations between the noun in the "noun + NO" structure and its head noun (Shimazu et al., 1986). Here, we show several examples that demonstrate the diversity of the sel "NO" is a Japanese postpositiona| which can represent a wide range of semantic relations. It is similar to "of" in English. These semantic relations between "noun + NO" structures and their head nouns are different than those between other adnominal constituents, e.g. adjectives and their head nouns. However, some "noun + NO" behavior is similar to the behavior of adjectives and nominal adjectivals. In these cases "noun + NO" seems not to differ semantically from adjectives and nominal adjectivals. Let us consider the English examples:
financial world / world of finance ("ZAIKAI") industrial center / center of industry ("SANGYOU NO CHUUSHIN") In this case "noun + NO" need not be distinguished from an adjective with respect to semantic behavior. However, in the following examples it is necessary to distinguish them from one another. global center / center of tile globe ("SEKAI NO CHUUSHIN / CHIKYUU NO CHUUSHIN")
We do not have a discrimination criteria that automatically recognizes whether a "noun + NO" structure is similar in its semantic behavior to that of adjectives or not. We have attempted to gather, semi-automatically, nolms in the "n(mn + NO" structure which behave like adjectives. One meaning of "KIMOCHI (feeling)" represents the semantic element <mental state>. In the above examples, the adjective, "KANASHII (sad)", and "noun + NO", "YOROKOBI NO (of delight)", represent the concrete contents of their head noun "KIMOCHI (feeling)", i.e. they also represent the mental state: "feeling". Therefore, even though they belong to different parts of speech (adjective/noun), they must be classified in the same semantic category since both carry the same meaning. Neither the adjective, "KANASHII (sad)", nor the "noun + NO", "YOROKOBI NO (of delight)", can appear in predicative position without changing their meaning.
However In the above examples, the noun in "noun + NO", "JOHN", does not include the concept, <mental state>, so it cannot represent the content of "KIMOCHI (feeling)." The adjective, "KANASHII (sad)", and the noun in the "noun + NO", "JOHN" do not embody the same concept and have a different semantic relation with their head noun. We cannot find the semantic similarities between "KANASHII (sad)" and "JOHN" that we could between "YOROKOBI NO (of delight)" and "KANASHII (sad)." We focus on the phenomena where adnominal constituents represent the concrete contents of their head nouns. This makes it possible to identify adjectives and "noun + NO" structures which are similar in semantic behavior to the referents of their head nouns. These expressions are extracted semi-automatically from large corpora.
How to Extract the Necessary Information
When we collect words which have some similarities, it is difficult to select the semantic axis for classification by making use of only the co-occurring words. In collecting similar words, some previous research took not only cooccurring words but also the context of these words into account (Grefenstette, 1994). One of the important points of our analysis is the introduction of the distinct semantic elements that both "noun + NO" structures and adjectivals (adjectives and nominals) have in common with their head nouns. We wanted to ascertain the similarities between "noun + NO" and other adnominal constituents based on these common semantic elements. For this reason, we used the semantic relations, in which adnominal constituents represent the concrete content of their head nouns, as a key to classification. We automatically 2 extracted these relations from one year of newspaper articles from Mainichi Shimbun (1994), 100 novels from Shincho publishers and 100 books covering a variety of topics. We used the following procedure to extract the necessary information.
Step 1) Extract from the corpora all nouns which are preceded by the Japanese expression "TOIU", which is something like "that" or "of." "TOIU + noun (noun that/of ...)" is a typical Japanese expression which introduces some information about the referent of the noun, such as apposition. Therefore, nouns found in this pattern may have their content elucidated by means of their modifiers.
Step 2) Extract from the corpora, all "noun + NO" structures, adjectives and nominal adjectivals which modify the nouns extracted in step 1.
NB, the relationships between adnominal constituents and their modified nouns extracted here include not only representations of the contents of the noun, but also other various relations.
Step 3) Extract "noun + NO" structures, adjectives and nominal adjectivals which represent the contents of the referents of the modified nouns.
Step 3 is done manually.
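A minimal sketch of how Steps 1 and 2 might be automated is given below; it is not the implementation used in this work, it assumes the corpora have already been segmented and part-of-speech tagged into (surface form, tag) pairs in romanized form, and it uses simple adjacency as a stand-in for a true modifier-head dependency.

from collections import defaultdict

def collect_toiu_nouns(sentences):
    # Step 1: nouns immediately preceded by "TOIU" ("TOIU + noun").
    nouns = set()
    for sent in sentences:                      # sent: list of (surface, pos) pairs
        for (w1, _), (w2, p2) in zip(sent, sent[1:]):
            if w1 == "TOIU" and p2 == "NOUN":
                nouns.add(w2)
    return nouns

def collect_modifiers(sentences, target_nouns):
    # Step 2: adjectives, nominal adjectivals and "noun + NO" phrases that
    # immediately precede one of the nouns collected in Step 1.
    modifiers = defaultdict(set)
    for sent in sentences:
        for i in range(2, len(sent)):
            head, head_pos = sent[i]
            if head_pos != "NOUN" or head not in target_nouns:
                continue
            w_prev, p_prev = sent[i - 1]
            if p_prev in ("ADJ", "ADJ_NOMINAL"):
                modifiers[head].add(w_prev)
            elif w_prev == "NO" and sent[i - 2][1] == "NOUN":
                modifiers[head].add(sent[i - 2][0] + " NO")
    return modifiers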
Step 4) In order to find the distribution of their semantic categories and analyze the semantic similarities between "noun + NO" and other adnominal constituents in each semantic category, we clustered the modified nouns automatically. This clustering was based on sets of similar adnominal constituents which represent the content of the referent of the modified noun. We can gather similar modified nouns when we classify the modified nouns according to the similarities of the adnominal constituents, because in our data both adnominal constituents and their modified nouns have the same semantic elements in common that we mentioned above. We attempted to construct the Semantic Map of the modified nouns gathered by the abovementioned method by using the self-organizing system of the neural network model (Ma et al., 2000). We suppose that both modified nouns and adnominal constituents have common sernantic elements when adnominal constituents represent the concrete content of their head nouns. If this is true, nouns with similar meanings are located near each other oil the semantic map, self-organized by the similarities of semantic elements among the adnominal constituents. The result of our experiment verified this supposition ( Figure I). The nouns with a similar meaning are located near each other on the map and we could divide the distribution of the modified nouns into seven categories ( Figure 2).
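The clustering in Step 4 can be illustrated with a small self-organizing map over co-occurrence vectors; the sketch below is a simplified stand-in for the neural network model of (Ma et al., 2000), and the nouns and constituent vectors are hypothetical.

import numpy as np

def train_som(vectors, grid=(10, 10), epochs=50, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    h, w = grid
    weights = rng.random((h, w, vectors.shape[1]))
    coords = np.stack(np.meshgrid(np.arange(h), np.arange(w), indexing="ij"), axis=-1)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 0.5
        for v in rng.permutation(vectors):
            # find the best-matching unit for this input vector
            dist = np.linalg.norm(weights - v, axis=-1)
            bmu = np.unravel_index(np.argmin(dist), (h, w))
            # move the BMU and its grid neighbours toward the input
            d2 = ((coords - np.array(bmu)) ** 2).sum(axis=-1)
            g = np.exp(-d2 / (2 * sigma ** 2))[..., None]
            weights += lr * g * (v - weights)
    return weights

def map_position(weights, v):
    dist = np.linalg.norm(weights - v, axis=-1)
    return np.unravel_index(np.argmin(dist), dist.shape)

# Hypothetical input: rows = modified nouns, columns = adnominal constituents,
# 1 if the constituent was observed modifying the noun in the corpora.
nouns = ["KIMOCHI", "JOUTAI", "SEIKAKU"]
X = np.array([[1, 1, 0, 0], [0, 1, 1, 0], [0, 0, 1, 1]], dtype=float)
W = train_som(X)
for noun, row in zip(nouns, X):
    print(noun, "->", map_position(W, row))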
Each group, i.e. the "mental state" group, "state/ situation" group, "characteristics" group, "range/ area" group, "viewpoint/ standpoint" group, "aspect" group, and "others," represents a meaning held in common by nouns in the group. Mental state can be further divided into the state of emotion, mood and intention. As we analyze the adnominal constituents in each category of modified nouns, we can find the possibility of the co-occurrence of an adnominal constituent with a head noun. Table 1 shows examples of adjectives and nouns in "noun + NO" structures in each group. In the mental state, state/situation, aspect and characteristics groups~ adjectives appear more frequently than "noun + NO" constructions. These are simple adjectives. Ill the range/area and viewpoint/standpoint groups, "noun + NO" structures appear more frequently than simple adjectives. Nominal adjectivals derived from nouns plus the suffix "TEKIna" appear often with these noun groups. Most nouns in the groups "mental state: emotion", "state/situation" and "characteristics", contain abstract nouns which represent emotions, situations or characteristics. There are few concrete nouns. However, in the groups "range/area" and "viewpoint/standpoint', there are many concrete nouns which represent natural phenomena, organizations or professional domains and few abstract nouns. We can find differences among "noun + NO" structures, that is, there are adjectives which behave like nouns semantically and there are nouns which behave semantically like adjectives.
5 The semantic behavior of the "noun -t-NO" structure which is similar to that of adjectives 5.1 Types of nouns in the "noun -'t-NO" structure As we mentioned in section 3, we extracted the "noun + NO" structures which have the same semantic element, along with similar adjectives, from large corpora. For example, KIKEN_NA JOUTAI (dangerous) (situation) dangerous situation
In this case "dangerous" represents the state concretely.
MIKETTEI NO JOUTAI (indecision) (of) (situation) a situation of indecision
In this case, the "MIKETTEI NO (of indecision)" also represents the state concretely. Here, both "KIKENN_NA (dangerous)" and "MIKETTEI NO (of indecision)" have tile same semantic element "state" in common. We find that a "situation" can be represented by both an adjective and the "noun + NO" structure. When "MIKETTEI NO (of indecision)" cooccurs with modified nouns other than "situation", it mostly represents the semantic notion, e.g. "MIKETTEI NO MONDAI (a problem of indecision)", and so on. That is,"MIKETTEI NO (of indecision)," represents the situation of a problem. So we see that "MIKETTEI NO (of indecision)" is in itself like an adjective.
On the other hand, "KUMORI NO (cloudiness)" behaves sometimes like an adjective and sometimes not.
KUMORI
NO JOUTAI (cloudiness) (of) (state) a state of cloudiness The semantic behavior of "KUMORI NO (of cloudiness)" is like the behavior of adjectives in that the cloudiness represents the state as "KIKEN_NA (dangerous)," however, "KU-MORI NO (of cloudiness)" does not always represent the state of the referent of the modified noun though "MIKETTEI NO (of indecision)" always represents that.
"KUMORI (cloudiness)" is a natural phenomenon which can be pointed to concretely. For example,
KUMORI
NO NISSU (cloudiness) (of) (amount) WA 4 GATU NI SITEWA IJOU DA.
The amount of cloudiness is unusual for April.
In this example, "KUMORI NO (of cloudiness)" modifies "NISSU (the amount)," and does not represent a state but the possessor of the amount.
As the examples of "MIKETTEI NO (of indecision)" and "KUMORI NO (of cloudiness)" show, there are nouns which have the same properties as adjectives intrinsically (e.g. "MIKETTEI (indecision)"), and other nouns which intrinsically have different properties from adjectives (e.g. "KUMORI (cloudiness)"). So, it is important to consider the properties of the noun in "noun + NO" when we analyze the "noun + NO" which behaves semantically like an adjective. Such an analysis enables us to find the situation in which they act like adjectives. We classified nouns in "noun + NO" structures into three types based on what the nouns refer to. Nouns from the last category, 3), are similar to adjectives semantically. As adjectives do not represent concrete objects or verb-like notions, nouns from these categories only occasionally resemble adjectives. 2) nominalizations (like decision, work, and so on)
3) nouns which belong to neither 1) nor 2), e.g. abstract nouns and so on.
As our corpora contain mainly newspaper articles, many compound nouns appear. Since the last word in a compound noun determines the properties of the whole word, we focus on the last word in classifying them. Table 2 contains examples of the noun categories. "KOUGYOU TOSHI (industry city)" is an example of a compound noun where the last word "TOSHI (city)" determines the properties. 3) nouns which belong to neither 1) nor 2) MUTONTYAKU, JAKUSHOU (carelessness) (weakness)
In the following section, we analyze the similarities and differences of the semantic behavior of "noun + NO" structures and adjectives. Firstly, we describe the case in which the semantic behavior of "noun + NO" is similar to that of adjectives and then we mention the case in which the semantic behavior of "noun + NO" is different from that of adjectives. Secondly, we analyze several types of nouns in "noun + NO" which behave like adjectives, ewm though nouns in "noun + NO" are not intrinsically similar to adjectiw; types.
The differences of semantic behavior between nouns in "noun -b NO" and adjectives
For example, "KANASHII (sad)", "URESHII (pleasurable)", "ZANNEN_NA (regrettable)", "KANASHIMI NO (of sadness)", "YOROKOBI NO (of delight)" and so on, modify nouns such as "OMOI (thought)", "KANJI (emotion)" and so on. Using a set of adnominal constituents, such as "KANASHII (sad)", "URESHII (pleasurable)", "ZANNEN..NA (regrettable)", as keys for classification, we can classify the modified nouns, "OMOI (thought)", "KANJI (feeling)" and so on, into the same group. Then we can find a semantic relationship between these adnominal constituents and their head nouns, in this case, <emotion>. In the following, we describe the similar and differing semantic behaviors of "noun ÷ NO" and other adjectives in the same semantic category. As we described in the previous section, we extract sets of "noun + NO" structures and adjectives from data which was sorted semantically. Words in each set represent the semantic substance of the similar nouns which they modify. Therefore, their semantic categories are similar. Examples of modified nouns of a similar semantic category and their modifiers which have a semantic category similar to that of the nouns are listed in Table 3. Included are some "noun ÷ NO" examples which though cooccurring with <mental state> nouns are not classified as such themselves. There are many adjectives and nominal adjectivals which can modify nouns in Table 3, such as "AWARENA (poor)", "IJIRASHII (moving)" and "HOKO-RASHII (triumphant)." Some "noun ÷ NO" structures are semantically similar to these adjectives since they represent the contents of the emotion, e.g. "FUKAI NO KAN (sensation of displeasure)" and "YOROKOBI NO KIMOCHI (feeling of delight)." Most nouns in these "noun + NO" structures in Table 3 are classified into "mental activity by humans" by the "Word List Classified by Semantic Principles3. '' "Noun + NO" structures, which have this kind of semantic; category, are similar to adjectives and nominal adjectivals, as both represent the content of the human mind. We call this semantic cat-'~This list was compiled by The Natural Language Research Institute, Tokyo. On the other hand, some adnominal relationships concerning a mental state can only be represented by "noun + NO" structures, such as "HOSHIN NO KIMOCHI (desire of defending one's own interest)," "CHIKUZAI NO NEN (thought of moneymaking)" and "INTAI NO KIMOCHI (idea of retirement)." Event nouns are mainly used in these "noun + NO" structures. Adnominal modifying relations of "nominalization + NO + mental state_noun" structures represent an intentional mental state. This kind of intentional mental state cannot be expressed by adjectives. We call this semantic category "Intentional mental state."
We discussed two types of semantic representations above, i.e. Feeling and Intentional mental state. Feeling can be represented by adjectives and "noun + NO" structures. However, Intentional mental state can be represented only by "noun + NO" structures. From the standpoint of the characteristics of the modified nouns (they represent human mental states), these two mental activities (Feeling and Intentional mental state) are similar, even though there are .differences in whether the activity is intentional or not. However, from the standpoint of the selection of an adnominal relationship in the surface structure, whether the activity has active intention or not will be the deciding factor for the selection between adjectives and "noun + NO" structures.
The case where the semantic behavior of "noun + NO" structures is similar to that of adjectives
Here we focus on nouns whose properties are unlike those of adjectives, i.e. the nouns which refer to concrete objects, verbal notions and so on.
(1) In the case where "noun + NO" represents characteristics, there is some overlap between the semantic behavior of adjectives and "noun + NO" structures.
I) The case where the noun in "noun + NO" is a compound noun Let us compare "noun + NO" with adjective usage. In the previous two examples, the differences between "noun + NO" and adjectives depend only on whether the nouns they modify represent a person or a city where both head nouns have characteristics in common. However, "KOUGYOUTOSHI (industry city)" does not always have the same semantic relation to the modified noun, as seen in the following example:
MUKUCHI_NA
KOUGYOUTOSHI NO YUKYUTI (industry city) (of) (vacant land) NI TYAKUMOKU. They noticed the vacant land in the industrial city.
In this example, the semantic relation between "KOUGYOUTOSHI NO (of industry city)" and "YUKYUTI (the vacant land)" indicate the relation of possession so that it is not a semantic relation that adjectives can represent. When the modified nouns are abstract nouns that represent the property ("INSHOU (impression)" or "SEIKAKU (characteristics)" etc.), or instances of the concrete nouns in compound nouns ("KAWASAKI SHI (KAWASAKI city)"), the semantic function of compound nouns in "noun + NO" constructions represent the characteristics of the referent of the modified nouns as adjectives do. a) Modified nouns which are abstract nouns that represent a property.
KOUGYOUTOSHI NO IMEJI (industry city) (of) (image) GA OOKII. The image of an industrial city is strong.
KOUKYUUHIN
NO INSHOU (high quality item) (of) (impression) GA TUYOI SHANERU (with) CHANNEL the impression of a high-quality item is strong.
4Note that some words which are nouns in Japanese (e.g. industry, high quality)must be translated as adjec-tiw~ in English (e.g. industrial, high-quality) <city-SUZUKA-SHI> KOUGYOUTOSHI NO SUZUKA SHI (industry city) (of) (SUZUKA city) SUZUKA city which is an industrial city <item-diamonds> KOUKYUUHIN NO DAIYA (high quality item) (of) (diamond) Diamonds are a high-quality item <company-IBM> YURYOUGAISHA NO (excellent company) (of) IBM is an excellent company IBM When the modified noun is an instance of the last word of the modifying compound noun, the semantic function of the whole compound noun is similar to that of adjectives because, in this type of compound, we focus on the adjectival semantic element. For example, "KOUGYOU (industry)" in "KOUGYOUTOSHI (industry city)", "KOUKYUU (high-quality)" in "KOUKYU-UHIN (high quality item)", and "YUURYOU (excellent)" in "YUURYOUGAISHA (excellent company)" are adjectival.
II) the nouns that refer to the concrete object in "noun + NO"
Originally the nouns that refer to a concrete object or event do not have the same meaning as adjectives, however, they have similar semantic behavior to that of adjectives in the following case.
KARE WA OTONASHII KIHUU (mild) (disposition) NO MOTINUSHI DA. He has a mild disposition.
The "mild" represents the characteristic (disposition). In the following examples the "noun + NO" also indicate the characteristics of something.
4 The Semantic Map of the Modified Nouns Constructed by the Self-Organizing System of the Neural Network Model
Figure 1: Semantic Map 1
Figure 2: Semantic Map 2
"
Nouns" in the "noun + NO" structure a) mental activity KANASHIMI (sadness), FUKAI (displeasure), SHITASHIMI (familiarity), ZOUO (abhorrence), GAMAN (endurance), KOUKAI (regret), YOROKOBI (joy), MANZOKU (satisfaction), RAKUTAN (disappointment), IGAI (unexpected), ...and so on. b) nominalizations HOSHIN (self-defense), CHIKUZAI (moneymaking), INTAI (retirement), HIHAN (criticism), HIYAKU (rapid progress), ...and so on egory created by these adnominal constituents and their modified nouns "Feeling."
city) (of) (impression) GA TUYOI KAWASAKISHI WA... KAWASAKI city which gives a strong impression of an industrial city 4 b) Modified nouns which represent instances of the concrete nouns in compound nouns
mantic relation between "noun + NO" structures and their head nouns shown in their research.DENWA NO SECCHI
DENSHA NO TUUKIN
ASHITA NO DEITO
BILU NO MAE
KODOMO NO NAMAE
BAKUHATSU NO GEN'IN
KAISHI NO JIKOKU
HEYA NO BANGOU
KANOJO NO NOUTO
BENGOSHI NO SMITH SAN
installation of
the telephone
commuting by
train
a date for
tomorrow
in front of
the building
the name of
the child
the cause of
the explosion
the starting time
the number of
the room
her note
Mr. Smith,
the lawyer
NB: The English gloss of the "noun + NO" examples should be read from right to left.3 The Exploration of the Similarities
of Semantic Functions of "noun +
NO" Structures and Adjectives.
(The Method for this Research)
3.1 The Basic Concept
There is one case in which the meanings of ad-
nominal constituents are semantically similar
to the features of the referents of their head
nouns, e.g. adnominal constituents represent
the concrete contents of their head nouns. Let
us consider the Japanese phrase "KANASHII
KIMOCHI (sad feeling)" and "YOROKOBI NO
KIMOCHI (feeling of delight)" as examples.
KANASHII KIMOCHI
adjective
noun
(sad)
(feeling)
sad feeling
YOROKOBI
NO
KIMOCHI
noun
postp,
noun
(delight)
(of)
(feeling)
feeling of delight
Table 1 :
1List of adjectives and "noun + NO" Structures<mental state: emotion>
Adj:
KANASHII (sad), URESHII
(pleasurable)
noun+no: KANASHIMI (sadness),
YOROKOBI (delight)
<state/situation>
Adj:
ISOGASHII (busy),
MUTITUJONA (disorderly)
noun+no: KURAYAMI (darkness),
MUISHIKI (unconscious)
<aspect>
Adj:
YUUMOUNA (brave),
HIGEKITEKINA (tragic)
noun+no: KONTON (chaos), TAIHAI
(decadence)
<characteristic>
Adj:
NONKINA (carefree),
KISAKUNA (open-hearted)
noun+no: IJIPPARI (stubbornness),
GOUMANNA (arrogance)
<range/area>
Adj:
JOUSHIKITEKINA (comnmnsense),
KOUTEKINA (official)
noun+no: GAKUMON (studies), GYOUMU
(duty)
<viewpoint/standpoint>
Adj:
KYOUIKUTEKINA (educational),
SHOUGYOUTEKINA (economic)
noun+no: KYOUIKU (education), EISEI
(hygiene)
Table 2 :
2Some "noun + NO" constructions with "impression"1) nouns which refer to concrete objects
KOUGYOU TOSHI, HINOKI
(industry city)
(cypress)
2) nominalizations
SOKUBAKU,
KOUTEN
(restriction)
(improvement)
Table 3 :
3The modified nouns and adjectives,nominal adjectivals, and "noun + NO"
collected in the semantic category,
<mental state>
Modified nouns
KANJI (feeling), KAN (sensation),
OMOI (thought), KI (intention),
NEN (inclination), KIMOCHI (mind),
KIBUN (mood), KANJO (emotion),
JO (passion)
Adjectives and nominal adjectivals
AWARE_NA (poor), IJIRASHII (moving),
HOKORASHII (triumphant),
KINODOKU_NA (unfortunate),
SHIAWASE_NA (happy),
ZANNEN_NA (disappointing),
URESHII (pleasurable), ...and so on.
AcknowledgmentWe would like to thank Catherine Macleod of New York University and Kiyotaka Uchimoto of the Communications Research Laboratory for their invaluable help in writing this paper.Ordinary people have a strong impression of environmental pollution from the chemical company.The impression the children make is of a "HINOKI (HINOKI-tree)" and the impression the chemical company makes is of "KANKY-OUOSEN (environmental pollution)". These "noun + NO'structures represent the characteristics of children and a company in same manner that the adjective "mild" indicates his characteristic.In these examples, nouns in "noun + NO" represent objects and events and so on, i.e. "HINOKI-tree" and "environmental pollution" these nouns ordinarily do not behave like adjectives. That is, the adjective "mild" can represent a characteristic directly, however, these nouns in "noun + NO" cannot represent the characteristics of something directly. We cannot say "that children are HINOKI-tree" and "the company is the environmental pollution" while we can say "He is mild." That is, in this case, "noun + NO" cannot appear in the predicative position with this meaning. When we show the characteristics of something by using nouns that refer to concrete objects and events, we need to specify the modified nouns which indicate the characteristics like "impression, .... disposition" and so on.(2) "Noun + NO" can represent quantification.Some adjectives (:an also represent quantification. The rate of debt has reached a dangerous level for the household budget.The suggestion of the Japanese prime minister is at an "abstract" level on the "concreteabstract" scale and the rate of debt is at a "dangerous" level on the "safety-dangerous" scale. The level of concreteness and safety is represented by adjectives. On the other hand, the nouns that refer to concrete objects and verbal notions also represent a level by inference from the context. We can infer the scale from the contextual situation. For example, KOUNIN KOUHO WA UWASA NO DANKAI (rumor) (of) (stage) the stage of rumor DA GA BORUGA SHI Though it is completely at the stage of rumor, the candidate for the succession is Mr. Borgar ... SHUSHOU GAWA WA "" (the prime minister and his staff) ENZETU NO IKI (speech) (of) (level) WO KOERARENAKATTA. Though the prime minister and his staff said "we will specify the guidelines of the government proposal during the election", after all it was still at the level of speech.GIJUTUTEKINIWAKANSEI NO IKI (completeness) (of) (level) NI TASSHITEITA. It reached a level of completeness, technically.In the above case, we do not have a semantic element of actual "talk" in the "rumor" or "speech" meaning nor a semantic element "event" in the "completeness" meaning, but we have the level of "rumor" on the "truth-rumor" scale, the level of "speech" on the "statementspeech" scale and the level of "completeness" on the "incompleteness-completeness" scale. The nouns that refer to concrete objects and verbal actions are similar to adjectives when they represent a level in context.ConclusionIn this paper, we discussed the similarities and differences among adnominal constituents, i.e. adjectives and "noun + NO" structures which have a broad range of semantic functions. Nouns and adjectives differ in part of speech, but they sometimes have similarities when used adnominally. In such a case, we need not distinguish them from each other semantically. 
We investigated explicit criteria to detect similarities and differences between nouns and adjectives in adnominal usage. This research was verified by using large corpora and a self-organizing mapping system based on the neural network model. In future work, we will attempt to systematically classify words used adnominally according to the semantic behavior of adnominal constituents following the theoretical insights of Pustejovsky.
P. Bouillon. 1996. Mental State Adjectives: the Perspective of Generative Lexicon. In Proc. of COLING96.
G. Grefenstette. 1994. Corpus-Derived First, Second and Third-Order Word Affinities. In Proc. of the EURALEX '94.
H. Isahara and K. Kanzaki. 1999. Lexical Semantics to Disambiguate Polysemous Phenomena of Japanese Adnominal Constituents. In Proc. of ACL99.
Q. Ma, K. Kanzaki, M. Murata, K. Uchimoto, and H. Isahara. 2000. Construction of a Japanese Semantic Map using Self-Organizing Neural Network Model. In 6th Annual Meeting of the Association for Natural Language Processing, Japan. (will appear).
J. Pustejovsky. 1995. The Generative Lexicon. The MIT Press.
P. Saint-Dizier. 1998. A Generative Lexicon Perspective for Adjectival Modification. In Proc. of the Conference, volume 2, 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics (COLING-ACL '98).
A. Shimazu, S. Naito, and H. Nomura. 1986. Analysis of semantic relations between nouns connected by a Japanese particle "no".
219,304,838 | [] | The Logic of Typed Feature Structures
Cambridge University Press, Copyright Cambridge University Press, 1992
Bob Carpenter
Carnegie Mellon University
Cambridge, England
The Logic of Typed Feature Structures
Computational Linguistics
Cambridge Tracts, Cambridge University Press, 19(3), 1992. Reviewed by Fernando Pereira, AT&T Bell Laboratories
Introduction
For those of us who belonged to the "Bay Area (Computational) Linguistics Community," the early eighties were a heady time. Local researchers working on linguistics, computational linguistics, and logic programming were investigating notions of category, type, feature, term, and partial specification that appeared to converge to a powerful new approach for describing (linguistic) objects and their relationships by monotonic accumulation of constraints between their features. The seed notions had almost independently arisen in generalized phrase structure grammar (GPSG) (Gazdar et al. 1985), lexical-functional grammar (LFG) (Bresnan and Kaplan 1982), functional-unification grammar (FUG) (Kay 1985), logic programming (Colmerauer 1978; Pereira and Warren 1980), and terminological reasoning systems (Ait-Kaci 1984). It took, however, a lot of experimental and theoretical work to identify precisely what the core notions were, how particular systems related to the core notions, and what were the most illuminating mathematical accounts of that core. The development of the unification-based formalism PATR-II (Shieber 1984) was an early step toward the definition of the core, but its mathematical analysis, and the clarification of the connections between the various systems, are only now coming to a reasonable closure. The Logic of Typed Feature Structures is the first monograph that brings all the main theoretical ideas into one place where they can be related and compared in a unified setting. Carpenter's book touches most of the crucial questions of the developments during the decade, provides proofs for central results, and reaches right up to the edge of current research in the field. These contributions alone make it an indispensable compendium for the researcher or graduate student working on constraint-based grammatical formalisms, and they also make it a very useful reference work for researchers in object-oriented databases and logic programming.
Having discharged the main obligation of the reviewer of saying who should read the book under review and why, I will now survey each of the book's four parts while raising some more general questions impinging on the whole book as they arise from the discussion of each part.
Basics
From the beginning, Carpenter emphasizes the strong links between attribute-value formalisms in computational linguistics and in knowledge representation (KR). This is a welcome conceptual connection. Historically, however, the two strands developed fairly independently of each other. Ait-Kaci's dissertation (1984) arose from an attempt to define a computationally tractable core of inheritance and frame-based reasoning, but its relevance to the analysis of linguistic categories was not appreciated as early as it should have been. Interestingly, systemic and dependency grammars had the lead in bringing inheritance and featural classification notions together on the linguistic side, but their influence on the particulars of the linguistic formalisms under consideration was slight, if any. Inheritance reasoning played no direct role in LFG, GPSG, PATR-II, or logic grammars, and it came into play first as a lexicon organization discipline (Flickinger, Pollard, and Wasow 1985; Shieber 1985), not as a central part of the formalism.

The organization of the first part of the book follows naturally from the emphasis on KR ideas. Types and inheritance are discussed first, followed by feature (attribute-value) structures and the relations and operations that they inherit from the underlying type system: subsumption and join (unification). The last introductory chapter addresses in detail the crucial move of Kasper and Rounds (1986) to clarify the meaning of feature structures by viewing them as models of appropriately chosen modal logics.
Feature Structures and Feature Logics
The fruitful connection between feature structures and feature logics is pursued throughout the book, with soundness and completeness results for the basic system and all the major extensions and variations considered later. If something is missing in that comprehensive development, it might be some effort to relate feature logics to modal logics, and feature structures to modal frames. I believe that the original Kasper-Rounds logic was to some extent inspired by modal logics of concurrency, in particular the modal Hennessy-Milner logic for CCS (Hennessy and Milner 1985). It has also been argued that the connection to modal logic is an important route for easier and more general proofs of the required normal form and completeness results (Blackburn 1991).
Representations versus Algorithms
The introductory part establishes the algebraic, denotational semantics orientation of the book, and throughout the book, the more computational aspects of feature logics receive little attention. In purely conjunctive feature logics such as those arising from PATR-II, there is a simple connection between formulas and models. An almost linear satisfiability procedure, based on the UNION-FIND algorithm (Aho, Hopcroft, and Ullman 1976; Ait-Kaci 1984; Jaffar 1984), can be used to build the unique most-general feature structure satisfying a formula. There is thus relatively little to say about computational complexity (but not about practical computation costs, as will be observed below when rational unification is discussed), and the algebraic approach is direct and instructive. When we move to more-expressive feature logics, however, the situation changes radically. There are no longer unique most-general models, the algebraic approach becomes more labored, and satisfiability becomes NP-hard or worse. Computational complexity results were a central part of the development of the more-expressive feature logics by Rounds, Kasper, Moshier, and others (Kasper and Rounds 1986; Moshier and Rounds 1987), but they are barely mentioned in Carpenter's book. One feels that those results, being of a more traditional (finite) model-theoretic character, may have been left out because they do not fit the book's algebraic plan.
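The role played by UNION-FIND here can be made concrete with a small sketch. The following illustrative Python fragment (the class and method names are mine, not the book's or the cited papers') shows union-find with path compression and union by rank, the data structure on which such almost-linear conjunctive unification procedures are built:

```python
# Minimal union-find with path compression and union by rank -- an
# illustrative sketch of the data structure behind almost-linear
# conjunctive feature-structure unification (names are illustrative).
class UnionFind:
    def __init__(self):
        self.parent = {}
        self.rank = {}

    def find(self, x):
        # Create a singleton class on first sight of x.
        if x not in self.parent:
            self.parent[x] = x
            self.rank[x] = 0
        # Locate the root, then compress the path towards it.
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, x, y):
        rx, ry = self.find(x), self.find(y)
        if rx == ry:
            return rx
        # Union by rank keeps the trees shallow.
        if self.rank[rx] < self.rank[ry]:
            rx, ry = ry, rx
        self.parent[ry] = rx
        if self.rank[rx] == self.rank[ry]:
            self.rank[rx] += 1
        return rx

if __name__ == "__main__":
    uf = UnionFind()
    uf.union("X", "Y")                    # e.g. equate two feature-structure nodes
    print(uf.find("X") == uf.find("Y"))   # True
```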
Feature Logic Intractability and Natural Language Processing
A more general point arises from the computational issues just discussed. The intractability of satisfiability for expressive feature logics might seem a serious roadblock in the practical application of those logics in grammatical description systems. Two escape routes, one pragmatic and the other more radical, suggest themselves. The pragmatic route involves trying to identify more tractable subcases that may cover most of the situations of interest in actual grammatical description. Such "optimistic" algorithms were already suggested in Kasper's dissertation (1987), and have been extensively investigated since (Maxwell and Kaplan 1989; Eisele and Dörre 1988).
The more radical route, which as far as I know has not been pursued vigorously, looks more closely at the search control aspects of language processing systems based on feature computations. It takes conjunctive description as the only one that can have global extent in a computation. Nonconjunctive aspects of a description are then ephemeral in that nonconjunctive connectives introduced in a derivation must be eliminated within a bounded number of steps by a committed choice operation based on some preference-based search control mechanism. Such a view can be seen as a mild generalization of the ideas of deterministic parsing (Marcus 1980), and also closely related to the flat-guard committed choice logic programming languages (Ueda 1987;Saraswat 1990). In both of those frameworks a single conjunctive constraint is constructed incrementally on the basis of local committed choices among alternatives. Search completeness is of course sacrificed, but the computational intractability arising from having to consider all the combinations of smaller alternative constraints into larger consistent constraints is bypassed. Finally, the radical route suggests a discipline of trying to replace as much as possible disjunctive or negative constraints by somewhat weaker kinds of underspecification that admit of purely conjunctive formulations. That program was already suggested in the context of deterministic parsing (d-theory) (Marcus, Hindle, and Fleck 1983), and more recently in the context of incremental monotonic semantic interpretation (Alshawi and Crouch 1992), and might also be profitably employed in the more abstract feature-logic setting.
Prerequisites
I am well aware of the difficulties in determining what should be taken as a reasonable common background for readers in an interdisciplinary topic. There is little enough commonality in the theoretical backgrounds of computer scientists trained in different schools, and the common background becomes even more difficult to find when one wants to reach also theoretical linguists and AI researchers. Still, the introductory chapters, including their historical portions, seem to assume more than is strictly necessary, and in fact sometimes seem to assume what is later explained in the text in careful detail. This kind of forward reference might confuse readers as to what the book's prerequisites are, and what they should know as a matter of course. This is especially the case with respect to concepts of domain theory such as complete partial orders, conditional completeness, and powerdomains, which the great majority of potential readers (even, I believe, many U.S.-trained theoretical computer scientists) will not be familiar with. The early mentions will thus be confusing to them, even though there is later in the book a good introduction to those prerequisites. Another instance is the repeated mentions to intensionality and extensionality before their careful discussion in Chapter 8. Those have simply too many (admittedly related) meanings for the reader who would most benefit from the book to grasp what they refer to in feature logics before the in-depth discussion.
2.5 Quibbles

2.5.1 Up or Down? Following much of the literature on feature structures, Carpenter adopts the domain-theoretic convention that places more-specific objects "higher up" in informational partial orders. This conflicts with the conventions of model theory and knowledge representation, and leads to occasionally distracting dissonances in notation, terminology, and figures--for instance, inheritance hierarchies with the most specific elements at the top.
2.5.2 Abstract Feature Structures and Path Congruences. The discussion of abstract feature structures raises a historical difficulty. While I do not dispute that the full theoretical investigation of feature structures modulo renaming is correctly attributed to Moshier, the idea of representing renaming classes by equivalence relations over paths seems an obvious variant of the representation of such classes as deductively closed sets of path equations in Pereira and Shieber's account (1984) of the semantics of PATR-II, which is further explored in Shieber's dissertation (1989).
Unification Tradeoffs.
The discussion of the tradeoffs between acyclic and rational term unification at various points in the book might be a bit misleading. The original Prolog used a weakened pointer-based version of Robinson's (1965) unification algorithm (conceivably attributable to Boyer and Moore) without the occurs check. Removing the occurs check, which blocks the binding of a variable to a term containing the variable, from Robinson's algorithm allows cyclic unifiers to be built. This not only changes the interpretation of unification, but is also a source of potential nontermination when cyclic unifiers are applied. Nevertheless, in the early development of Prolog the occurs check was seen as too costly to be involved in the basic computational step of a programming language, and few if any examples were known in which the lack caused problems for knowledgeable Prolog programmers. The development of linear acyclic unification algorithms such as Paterson and Wegman's (1978) or Martelli and Montanari's (1982) did not change that assessment. Those algorithms require far heavier data structures and constant factors than Prolog's unification, they do not interface well with Prolog's backtracking control regime, and, most importantly, they are linear on the sum of the sizes of the terms involved. In contrast, for most practical purposes, Prolog's algorithm is linear on the size of the smaller term involved, which depends only on program size and not on the length of the computation. This was crucial for the acceptance of Prolog as a programming language, since it was felt that the cost of a procedure call in a reasonable programming language should not depend on the sizes of the actual parameters. In Prolog II, Colmerauer and his colleagues side-stepped the main weakness of Prolog's unification, nontermination, by moving to rational term unification, which also has added representational value for certain applications (although for other applications, particularly those derived from theorem proving, only acyclic unification makes sense). The best rational term unification algorithms are almost linear in all cases, and may be linear on the size of the smaller term in the same cases as Prolog's algorithm. However, the data structure complexity and constant factors are still higher than in Prolog's algorithm, and the interaction with backtracking is less straightforward.
Extensions
The second part of the book concerns extensions and specializations: acyclic feature structures, type constraints, inequations, extensionality, and groundedness. I found most interesting in this part the very thorough accounts of type constraints and of inequations. With type constraints restricting what features are appropriate for a type (so, for instance, an agreement feature is only appropriate for types of phrases that are subject to agreement constraints, and must yield a value of appropriate agreement type), we move decisively beyond what was provided by all earlier formalisms with the exception of GPSG (which was limited in other ways). Type constraints support good engineering practice in writing large systems such as wide-coverage grammars. Furthermore, in certain cases type information can lead to more efficient implementation. If the set of features for each type can be determined at compile time, the normal open-ended attribute-value representation of features can be replaced by the kind of positional representation used for record structures in programming languages such as C.
Carpenter starts the discussion of inequations from the Prolog II inequation (dif) mechanism (Colmerauer 1986), and extends it elegantly to feature logics. The simplicity of the account shows that the earlier exposition was carefully orchestrated to allow extensions and alterations of the core framework without major upheavals.
Extensionality
I was somewhat less happy about the chapters on extensionality and groundedness. That material seems less definitive, and indeed various points of the discussion are confusing or unclearly targeted.
There are conceptual and formal reasons for taking seriously the extensionality question. Different researchers in the field started with different intuitions of feature structures, with different identity conditions. GPSG categories, for example, were seen purely extensionally as labeled trees. As the area developed, mismatches between pointer-based implementations of feature structures and conceptual choices, and failures of completeness for various feature logics, pushed for increasing intensionalization. However, Carpenter goes directly into technical aspects of extensionality without much attention to the examples and intuitions that brought the question forward in the first place. It would have been better if alternative feature-structure models, for instance various tree and domain-theoretic models, had been compared with respect to their computational and logical implications, even if they were to be ultimately discarded in favor of the now standard DFA models. As it is, the reader must turn elsewhere, for instance Shieber's (1985) monograph, for a broader comparative analysis of feature models.
As a minor problem related to the above, the discussion of feature structures as a solution for a (domain) equation over partial functions seems unclear as to whether that is the most intensional model or the most extensional one (which would seem to be the case).
The relationship between extensionality and Prolog II unification is hinted at repeatedly, but its computational implications are not discussed. The differences in extensionality of feature structures and Prolog II terms are directly reflected in the differences between the corresponding unification algorithms. Feature-structure unification requires the identification of all corresponding feature-structure nodes, while term unification (leaving aside issues of computational complexity and termination in certain algorithms) only needs to install pointers from leaf (variable) nodes to corresponding nodes (Jaffar 1984).
Other algorithmic connections are not noted either, such as that between feature structure collapsing and DFA minimization. Finally, issues of extensionality and individuation may be most important for object-oriented databases, but that application is not discussed.
Alternatives
The third part of the book, named "Alternatives," is really an introduction to technical tools needed in later applications. Variables and assignments add nothing to the power of previous systems, but are convenient when discussing grammars and in another form were historically important in Ait-Kaci's system. Feature algebras, on the other hand, simplify and generalize radically certain mathematical arguments about feature structures. In fact, they might have been introduced sooner in the text to improve conceptual unity and eliminate some repetition in proofs.
Domain Theory
The last chapter of "Alternatives" discusses infinite feature structures and their formalization through domains. While the topic is potentially important for rounding out the theory of feature structures and the sketch of domain theory is for the most part on target, one wonders again whether the uninitiated reader will not stumble on references to notions that are discussed only later or not at all. For instance, compactness is mentioned informally before its definition, without suitable intuitions being provided. Scott's information systems are mentioned, without definition, although they are quite relevant to the material at hand, particularly abstract feature structures. And some of the proofs are too sketchy for a reader who presumably is not yet familiar with typical argument patterns in domain theory. The chapter concludes with the suggestive comment that a formalization of feature structures in terms of abstract closure operators on domains would eliminate the repetitiveness of completeness proofs for feature logics. One wishes the suggestion had been tested in the book, although one might also wonder whether the full apparatus of domain theory would be needed to take advantage of the convenience of closure operators. After all, closure operators arise naturally in logic from the notions of deductive closure and of logical consequence (Tarski 1983), so one might imagine that the simpler proofs could be carried out in a model-theoretic setting short of domain theory.
Applications
The last part of the book applies the theory developed earlier in three important areas: the semantics of unification-based phrase structure formalisms, the semantics of feature-based definite clause programs, and the specification of recursive type constraints.
Semantics of Grammar Formalisms
Carpenter's account of the denotational semantics of unification-based phrase structure grammars benefits greatly from the extensive use of feature algebras and featurealgebra morphisms to connect derivation steps. Earlier treatments were much less perspicuous, because they were based on complex encodings of phrase-structure rules as feature structures and of derivation steps as formal manipulations on rule encodings (Pereira and Shieber 1984;Shieber 1984;Rounds and Manaster-Ramer 1987).
As a minor terminological point, the qualifier unification-based used here is somewhat unfortunate, because unification is just a particular constraint-solving algorithm applicable for certain kinds of constraint-based grammars. The term constraint-based grammar is both less biased and more appropriate to modern formalisms in which unification is only one of several constraint-solving methods. Historically, neither LFG nor GPSG were originally thought of in terms of unification. GPSG features and feature constraints were seen as abbreviatory conventions for large collections of context-free terminal categories (Gazdar 1982). LFG F-structures were seen as the result of a congruence-closure equation-solving process after a sentence was fully analyzed into constituents (C-structures; Bresnan and Kaplan 1982). Even the term unification in functional unification grammar was chosen by Martin Kay as intuitively suggestive, and not by analogy with Robinson's notion of unification.
Constraint-based grammar formalisms would not have gained the attention they did if they did not have practical parsing and generation algorithms. As is the case for programming languages, the impetus for giving a sound denotational semantics to those formalisms arose in part from the need to prove the correctness of particular implementation methods. However, Carpenter concentrates only on giving the denotational semantics for a typical formalism, and does not show its correspondence to its operational realization. Proofs of equivalence between denotational and operational semantics are useful not only as examples of what needs to be done to show the correctness of a parsing or generation algorithm, but also for the insights they give on the connections between the semantics of constraint-based formalisms, of logic programs, and of traditional formal language representations. The reader interested in those aspects will have to turn elsewhere, especially again to Shieber's monograph (1985).
Logic Programs and Recursive Types
Carpenter's semantics of constraint-based grammars extends straightforwardly to the form of definite-clause programming in Ait-Kaci and Nasr's (1986) LOGIN language, although one might have hoped for a bit more information on the connection to constraint logic programming.
The formalization of recursive type constraints, which were first introduced in Ait-Kaci's dissertation, is more challenging. Carpenter clarifies and completes Ait-Kaci's work, and relates it nicely to the computational interpretation of the constraint-based grammatical formalism, HPSG (Pollard and Sag 1987).
Details
The book is remarkably free of editorial errors, which can be particularly confusing but difficult to catch in a mathematical text. Here are a few problems that seem to have slipped through and could confuse the reader momentarily. The agr type seems to be missing in the Conc set (13) for the example of Figure 2.11. In Definition 4.2, and in a few other places, the convention that x = y is intended to mean that x and y are both defined and equal seems to be used without comment, but in other places the definedness is explicitly stated. On page 130, first sentence, the reference must be to "Prolog II and Prolog III", not to "Prolog II and Prolog II." On page 170, paragraph before Lemma 12.6, the first sentence should read "Note that even for countably based domains, there may be an uncountably infinite number of domain objects." The term "most-general morphism" used in Definition 13.14 was not defined anywhere that I could find, although there is some mention of pointwise ordering of morphisms (but are there lubs in the order?). There seems to be something wrong in Definition 15.13. I believe G@π should be Gπ, where Gπ is the result of resolving F along path π. Finally, the initial point in the discussion of fan-out resolution in the limit on page 244 should be F0, not Fi.
Conclusion
I believe that The Logic of Typed Feature Structures is essential for any practicing or prospective researcher on feature-based grammar or knowledge representation formalisms and also very useful to researchers or graduate students in the grammar formalisms area of computational linguistics. Nowhere else can one find all the main mathematical analysis tools related to each other and all the central results carefully proved. Many readers, however, will need to come equipped with the support of a careful instructor or an attentive reading of a good introduction to the mathematical theory of partial orders, for instance, Davey and Priestley's (1990) Introduction to Lattices and Order. And those readers interested in the complexity of decision procedures for feature logics or in implementing systems based on them will have to look elsewhere for detailed algorithmic descriptions and complexity analyses of operations on feature structures and formulas. Carpenter's book is more in the European tradition that emphasizes algebraic models for formalisms than in the American tradition of complexity analyses for deductive procedures. Both are important. The Logic of Typed Feature Structures is the first systematic mapping of the landscape of feature logics, but many of the underlying processes and mechanisms still await an equally adept analysis.
Acknowledgments

David Israel made several useful suggestions on content and form, and Daniel Pereira helped simplify and clarify the prose. All remaining errors, obscurities, and biases are, of course, my own.
Aho, A. V.; Hopcroft, J. E.; and Ullman, J. D. (1976). The Design and Analysis of Computer Algorithms. Addison-Wesley.

Ait-Kaci, Hasan (1984). A lattice theoretic approach to computation based on a calculus of partially ordered type structures. Doctoral dissertation, University of Pennsylvania, Philadelphia, PA.

Ait-Kaci, Hasan, and Nasr, R. (1986). "LOGIN: A logic programming language with built-in inheritance." Logic Programming, 3(3), 185-217.

Alshawi, Hiyan, and Crouch, R. (1992). "Monotonic semantic interpretation." In Proceedings, 30th Annual Meeting of the Association for Computational Linguistics. Newark, DE, 32-39.

Blackburn, P. (1991). "Modal logic and attribute value structures." In Colloquium on Modal Logic, edited by M. de Rijke. Dutch Network for Language, Logic and Information.

Bresnan, Joan, and Kaplan, Ronald (1982). "Lexical-functional grammar: A formal system for grammatical representation." In The Mental Representation of Grammatical Relations, edited by Joan Bresnan, 173-281.

Colmerauer, Alain (1978). "Metamorphosis grammars." In Natural Language Communication with Computers, edited by L. Bolc. Springer-Verlag. (First appeared as "Les grammaires de metamorphose," Groupe d'Intelligence Artificielle, Université de Marseille II, November 1975.)

Colmerauer, Alain (1986). "Theoretical model of Prolog II." In Logic Programming and its Applications, edited by M. van Caneghen and David H. D. Warren. Ablex Series in Artificial Intelligence, 3-31. Ablex.

Davey, B. A., and Priestley, H. A. (1990). Introduction to Lattices and Order. Cambridge University Press.

Eisele, A., and Dörre, J. (1988). "Unification of disjunctive feature descriptions." In Proceedings, 26th Annual Meeting of the Association for Computational Linguistics. Buffalo, NY, 286-294.

Flickinger, Dan; Pollard, Carl; and Wasow, Thomas (1985). "Structure-sharing in lexical representation." In Proceedings, 23rd Annual Meeting of the Association for Computational Linguistics. Chicago, IL, 262-267.

Gazdar, Gerald (1982). "Phrase structure grammar." In The Nature of Syntactic Representation, edited by Pauline Jacobson and Geoffrey K. Pullum, 131-186. D. Reidel.

Gazdar, Gerald; Klein, Ewan; Pullum, Geoffrey K.; and Sag, Ivan (1985). Generalized Phrase Structure Grammar.

Hennessy, M., and Milner, R. (1985). "Algebraic laws for nondeterminism and concurrency." Journal of the Association for Computing Machinery, 32(1), 137-161.

Jaffar, J. (1984). "Efficient unification over infinite terms." New Generation Computing, 2(3), 207-219.

Kasper, Robert T. (1987). Feature structures: A logical theory with application to language analysis. Doctoral dissertation, University of Michigan, Ann Arbor, Michigan.

Kasper, Robert T., and Rounds, William C. (1986). "A logical semantics for feature structures." In Proceedings, 24th Annual Meeting of the Association for Computational Linguistics. New York, 257-266.

Kay, Martin (1985). "Parsing in functional unification grammar." In Natural Language Parsing, edited by David R. Dowty, Lauri Karttunen, and Arnold M. Zwicky, 251-278. Cambridge University Press.

Marcus, Mitchell P. (1980). A Theory of Syntactic Recognition for Natural Language.

Marcus, Mitchell P.; Hindle, Donald; and Fleck, Margaret (1983). "D-theory: Talking about talking about trees." In Proceedings, 21st Annual Meeting of the Association for Computational Linguistics. Cambridge, MA.

Martelli, A., and Montanari, U. (1982). "An efficient unification algorithm." ACM Transactions on Programming Languages and Systems, 4(2), 258-282.

Maxwell, J. T. III, and Kaplan, Ronald M. (1989). "An overview of disjunctive constraint satisfaction." In Proceedings, First International Workshop on Parsing Technology, edited by Masaru Tomita. Pittsburgh, PA.

Moshier, M. D., and Rounds, William C. (1987). "A logic for partially specified data structures." In ACM Symposium on the Principles of Programming Languages. Munich, Germany.

Paterson, M. S., and Wegman, M. N. (1978). "Linear unification." Journal of Computer and Systems Sciences, 16(2), 158-167.

Pereira, Fernando C., and Shieber, Stuart M. (1984). "The semantics of grammar formalisms seen as computer languages." In Proceedings of 1984 International Computational Linguistics Conference. Stanford, CA, 123-129.

Pereira, Fernando C., and Warren, David H. D. (1980). "Definite clause grammars for language analysis--A survey of the formalism and a comparison with augmented transition networks." Artificial Intelligence, 13, 231-278.

Pollard, Carl, and Sag, Ivan (1987). Information-Based Syntax and Semantics, Volume I: Fundamentals. Lecture Notes 13, Center for the Study of Language and Information, Stanford, CA.

Robinson, J. (1965). "A machine-oriented logic based on the resolution principle." Journal of the Association for Computing Machinery, 12(1), 23-44.

Rounds, William C., and Manaster-Ramer, Alexis (1987). "A logical version of functional grammar." In Proceedings, 25th Annual Meeting of the Association for Computational Linguistics. Stanford, CA, 89-96.

Saraswat, V. A. (1990). "JANUS: A step towards distributed constraint programming." In Logic Programming: Proceedings of the 1990 North American Conference, edited by S. Debray and M. Hermenegildo, 431-446. The MIT Press.

Shieber, Stuart M. (1984). "The design of a computer language for linguistic information." In Proceedings of 1984 International Computational Linguistics Conference. Stanford, CA, 362-366.

Shieber, Stuart M. (1985). An Introduction to Unification-Based Approaches to Grammar. Lecture Notes 4, Center for the Study of Language and Information, Stanford, CA.

Shieber, Stuart M. (1989). Parsing and type inference for natural and computer languages. Doctoral dissertation, Department of Computer Science, Stanford University, Stanford, CA.

Shieber, Stuart M. (1992). Constraint-Based Grammar Formalisms. The MIT Press.

Tarski, A. (1983). Logic, Semantics, Metamathematics. Second edition. Hackett Publishing Company.

Ueda, K. (1987). "Guarded Horn clauses." In Concurrent Prolog: Collected Papers, edited by Ehud Shapiro, 140-156. The MIT Press.

Fernando Pereira is president of the Association for Computational Linguistics. Pereira's address is: AT&T Bell Laboratories, 2D-447, 600 Mountain Avenue, PO Box 636, Murray Hill, NJ 07974-0636; e-mail: pereira@research.att.com.
||
174,800,545 | Word-Node2Vec: Improving Word Embedding with Document-Level Non-Local Word Co-occurrences | Standard word embedding algorithms, such as word2vec and Glove, make a restrictive assumption that words are likely to be semantically related only if they co-occur locally within a window of fixed size. However, this restrictive assumption may not capture the semantic association between words that co-occur frequently but non-locally within documents. To alleviate this restriction, in this paper, we propose a graph-based word embedding method, named 'word-node2vec'. By relaxing the strong constraint of locality, our method is able to capture both local and non-local co-occurrences. Word-node2vec constructs a weighted graph, where each node represents a word and the weight of an edge between two nodes represents a combination of both local (e.g. word2vec) and document-level co-occurrences. Our experiments show that word-node2vec outperforms word2vec and glove on a range of different tasks, such as word-pair similarity prediction, word analogy and concept categorization. | [
3626819,
7478738,
51838647,
1957433,
5278106
] | Word-Node2Vec: Improving Word Embedding with Document-Level Non-Local Word Co-occurrences
Association for Computational Linguistics, Copyright Association for Computational Linguistics, June 2 - June 7, 2019. 2019
Procheta Sen procheta.sen2@mail.dcu.ie
ADAPT Centre
School of Computing
Dublin City University
Dublin, Ireland
Debasis Ganguly
IBM Research
Dublin, Ireland
Gareth J F Jones
ADAPT Centre
School of Computing
Dublin City University
Dublin, Ireland
Gareth.Jones@dcu.ie
Word-Node2Vec: Improving Word Embedding with Document-Level Non-Local Word Co-occurrences
Proceedings of NAACL-HLT 2019
NAACL-HLT 2019, Minneapolis, Minnesota. Association for Computational Linguistics, June 2 - June 7, 2019. 1041
Standard word embedding algorithms, such as word2vec and Glove, make a restrictive assumption that words are likely to be semantically related only if they co-occur locally within a window of fixed size. However, this restrictive assumption may not capture the semantic association between words that co-occur frequently but non-locally within documents. To alleviate this restriction, in this paper, we propose a graph-based word embedding method, named 'word-node2vec'. By relaxing the strong constraint of locality, our method is able to capture both local and non-local co-occurrences. Word-node2vec constructs a weighted graph, where each node represents a word and the weight of an edge between two nodes represents a combination of both local (e.g. word2vec) and document-level co-occurrences. Our experiments show that word-node2vec outperforms word2vec and glove on a range of different tasks, such as word-pair similarity prediction, word analogy and concept categorization.
Introduction
Word embedding, the process of obtaining vector representations of words, is a first step towards addressing language semantics, in which discrete entities, such as words, are embedded as vectors over a continuous space of reals. This not only facilitates to obtain semantic similarities between words to improve tasks such as semantic search (Ganguly et al., 2015;Roy et al., 2016), but is also useful in a number of down-stream NLP tasks including concept categorization (Jastrzebski et al., 2017), information retrieval (Guo et al., 2016), sentence similarity prediction (Mueller and Thyagarajan, 2016), sentiment analysis (Faruqui et al., 2015) and POS tagging (Tsvetkov et al., 2016) etc.
Word embedding approaches such as word2vec (Mikolov et al., 2013a) and Glove (Pennington et al., 2014) rely on a large corpus to learn the association between words. The architecture of existing word embedding approaches mimics the process of human cognition of word association by learning the representation of each word with an objective of maximizing the likelihood of predicting the words around its local context (defined by a fixed length word window). A limitation of existing word embedding approaches, such as word2vec and glove, is that they use a strong constraint that words are likely to be semantically related to each other only if one occurs within a local context of the another, where the local context is given by a word window of specified length.
On the other hand, non-local or document-level co-occurrences between words have been widely used to estimate semantic similarities between words. More specifically, the latent semantic analysis (LSA) method proposed by Deerwester et al. (1990) uses a spectral analysis (method of principal component analysis) of the term-document matrix of a collection to obtain the most informative concepts (word classes), and then expresses each document as a linear combination of these principal components. Blei et al. (2003) estimate a generative model from a given collection by assuming that documents are mixtures of a preset number of topics, where each topic represents a word distribution over the vocabulary. This is largely similar to decomposing a term-document matrix as a product of matrices with non-negative components, a process commonly known as non-negative matrix factorization (NMF) (Gaussier and Goutte, 2005). The underlying common idea among all these approaches is to make use of the frequent document-level word cooccurrences to identify likely semantic association between words.
Despite the presence of a vast volume of literature on document-level (non-local) word co-occurrences, word embedding approaches do not utilize this information to derive the word representations. In this paper, we propose to augment the document-level non-local word co-occurrence information with the local co-occurrence information that methods such as word2vec and glove use. More specifically, we propose a graph-based word embedding method, named word-node2vec, that by relaxing the strong constraint of locality, is able to capture both the local and non-local co-occurrences. To represent the local dependencies, each node, representative of a word (hence the name 'word-node'), is initialized with a vector representation obtained with a standard method, e.g. word2vec. We then define the weight of the edge between a pair of word-nodes to reflect their likelihood of non-local co-occurrence, computed with the help of the global term-document matrix for the whole collection.

The rest of the paper is organized as follows. In Section 2, we survey existing literature on word embedding. In Section 3, we revisit the skip-gram approach and propose a graph-based view of the skip-gram objective as a precursor to developing our model. In Section 4, we extend the skip-gram graph model with non-local document-level co-occurrence information. Section 5 describes our experimental setup. Section 6 reports the results of our new embedding approach against a number of baselines. Finally, Section 7 concludes the paper with directions for future work.
Related Work
The word2vec (Mikolov et al., 2013a) embedding model shifts a window of a predefined size (a parameter) across the text of a collection of documents in order to train a linear classifier for each word to predict itself given its context (continuous bag-of-words), or its context given the word (skip-gram). The parameter vector transforming a word to its context (or vice-versa) gives its embedded representation. In addition to making use of the words in the context as positive samples, word2vec also relies on the use of words randomly sampled from the collection (outside the current context) as negative examples. Levy and Goldberg (2014) showed that the negative sampling based skip-gram (SGNS) objective function of word2vec is mathematically equivalent to factorizing a positive point-wise mutual information gain (PPMI) matrix shifted by log(k), where k is the number of negative samples.
The key idea behind the glove algorithm proposed in (Pennington et al., 2014) is to make use of the ratio of the co-occurrence probabilities between word pairs to better distinguish semantically related words from non-related ones. The study ultimately shows that factorizing the log of the co-occurrence matrix leads to effective embedded representation of words. The co-occurrences in both word2vec and glove are essentially local in nature. In contrast, our proposed algorithm leverages both local and non-local co-occurrences.
More recently, Peters et al. (2018) proposed ELMO, a deep contextualized word representation with layers of stacked bi-directional LSTMs to model both a) complex characteristics of word use (e.g., syntax and semantics), and b) their diversity across various linguistic contexts. A limitation of ELMO is that a word representation may effectively be learned mainly in the presence of an associated context, as a result of which the method is likely to find applications mostly in downstream tasks, e.g. question answering and sentiment analysis. In contrast, our proposed method can learn the representation of a word in isolation, which means that, similar to word2vec and Glove, word vectors obtained using our method can be applied directly to (and are also likely to work well for) word similarity and word analogy tasks. We include ELMO as one of our baseline approaches in our experiments.

Grover and Leskovec (2016) proposed a skip-gram based objective function to embed each node of a graph. Analogous to skip-gram based word embedding, each node vector is given as input to a linear classifier to predict the context vector around a node. The context vector around a node, in this case, consists of a sequence of nodes visited by a random walk starting from that node. In our method, we use a similar graph-based construction to train vector representations of a node (each node a word). However, we use a stratified sampling approach within a maximum distance (hop-count) of 2, instead of allowing the random walk to proceed in a combined depth-first and breadth-first manner, as in (Grover and Leskovec, 2016). Through our experiments, we find that larger hop-counts (i.e. longer transitive dependencies) introduce noise in the document-level word co-occurrence estimation process.
Generalized Word Embedding
In this section, we propose a general word embedding framework based on the skip-gram objective function of word2vec. Our proposed method relies on a general construction of the context around a word. We modify the skip-gram objective function of word2vec to take into account this general context of words. Before describing our proposed approach, we revisit the objective function of negative sampling based skip-gram word2vec (SGNS).
Skip-gram. In word2vec, the context of a word comprises words occurring within a window of a fixed size (say $k$) pivoted at a particular instance of $w$ in the collection. More formally, let $\Lambda(w)$ denote the set of indexes where the word $w$ occurs in a collection $C = \{t_1, \ldots, t_T\}$, $T$ denoting the total number of tokens in the collection $C$, i.e.

$$\Lambda(w) = \{i : t_i = w\}. \quad (1)$$

We then construct the context $c(w)$ of a word as

$$c(w) = \cup_{i \in \Lambda(w)} \cup_{j=-k,\, j \neq 0}^{k} t_{i+j}. \quad (2)$$

Let $\Omega^+$ denote the set of all observed word-context pairs $(w, c(w))$, i.e.

$$\Omega^+ = \cup_{w \in V} \{w, c(w)\}, \quad (3)$$

where $V$ denotes the vocabulary set, and let $\Omega^-$ denote the set of negative samples of word-context pairs, i.e.

$$\Omega^- = \cup_{w \in V} \{w, \cup\{v : v \sim (V - c(w))\}\}, \quad (4)$$

where the words $v$ in the negative context set are randomly sampled from the complement set of $c(w)$. Let $y$ be an indicator random variable denoting semantic relatedness of a word with its context. For a word $w$ and its context $c(w)$ (as defined in Equation 2), the SGNS algorithm seeks to maximize the objective function

$$J(\theta) = \sum_{(w, c(w)) \in \Omega^+} p(y=1 \mid w, \mathbf{c}_w) + \sum_{(w, c(w)) \in \Omega^-} p(y=0 \mid w, \mathbf{c}_w), \quad (5)$$

where $p(.)$ is the log-likelihood function, and $\theta \in \mathbb{R}^{d \times |V|}$ represents the trainable matrix of parameters, each $d$-dimensional column vector of the matrix $\theta$ denoting the vector representation of word $w$, i.e. $\mathbf{w} = \theta_w$. Note that the vector for a set of context words $c(w)$ is obtained by some aggregation function (sum or average) over the constituent words, i.e.

$$\mathbf{c}(w) = \sum_{u \in c(w)} \mathbf{u}. \quad (6)$$

In order to optimize $J(\theta)$, the word2vec approach shifts a window of size $k$ pivoted around a word $w = t_i$ (the token positioned at offset $i$ in the corpus), and applies stochastic gradient descent (SGD) to update the parameters for the corresponding word $w$ and its context vector $\mathbf{c}(w)$.
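As a concrete illustration of Equations 1-4, the following Python sketch builds $\Lambda(w)$, the local contexts $c(w)$, and positive/negative word-context pairs. The uniform negative sampling, the per-word construction, and all function names are simplifying assumptions made for exposition; they are not part of the word2vec implementation, which samples negatives per training pair from a smoothed unigram distribution.

```python
import random
from collections import defaultdict

def build_contexts(tokens, k=5):
    """Return Lambda (word -> positions) and c (word -> local context set),
    following Equations 1 and 2."""
    Lambda = defaultdict(set)
    for i, w in enumerate(tokens):
        Lambda[w].add(i)
    context = defaultdict(set)
    for w, positions in Lambda.items():
        for i in positions:
            for j in range(-k, k + 1):
                if j != 0 and 0 <= i + j < len(tokens):
                    context[w].add(tokens[i + j])
    return Lambda, context

def sgns_pairs(tokens, k=5, neg=5, seed=0):
    """Positive pairs (w, c(w)) and negatives sampled outside c(w),
    mirroring Equations 3 and 4 (uniform negative sampling is a simplification)."""
    rng = random.Random(seed)
    vocab = list(set(tokens))
    _, context = build_contexts(tokens, k)
    positives, negatives = [], []
    for w in vocab:
        positives.append((w, context[w]))
        complement = [v for v in vocab if v not in context[w]]
        negatives.append((w, set(rng.choices(complement, k=neg)) if complement else set()))
    return positives, negatives

if __name__ == "__main__":
    toks = "the cat sat on the mat and the dog sat on the rug".split()
    pos, neg = sgns_pairs(toks, k=2, neg=3)
    print(pos[0], neg[0])
```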
A Graph Formulation of SGNS. We now propose a general framework that allows contexts to be defined in a more general way. The solution relies on defining a graph $G = (\mathcal{V}, E)$, where each node corresponds to a word from the vocabulary of the given collection, i.e.

$$\mathcal{V} = \{x_w : w \in V\}. \quad (7)$$

In general, an edge $(x_u, x_v) \in E$ represents a relation between two words $u$ and $v$ of weight $\omega(x_u, x_v) \in \mathbb{R}$. For example, in order to define the context of SGNS (Equation 2), the edge set is defined as

$$E = \{(x_w, x_u) : u \in \cup_{i \in \Lambda(w)} \cup_{j=-k,\, j \neq 0}^{k} t_{i+j}\}. \quad (8)$$

Learning the vector representations for each node of the graph $G$ leads to learning the vector representation for each word, because there is a one-one mapping between the set of nodes $\mathcal{V}$ and the set of words $V$ (henceforth we refer to a node of this general class of graphs, defined as per Equation 7, as a word-node). The objective of the embedding is to learn vector representations of nodes such that two nodes are close in the embedded space if, as per the edge relations of the graph, these nodes are within a $\kappa$-adjacency neighborhood of each other. The $\kappa$-adjacency neighborhood of a graph is the set

$$N_\kappa(x_w) = \{x_u \in \mathcal{V} : h(x_w, x_u) \leq \kappa\}, \quad (9)$$

where $h(x_w, x_u)$ denotes the hop-count or adjacency number between nodes $x_w$ and $x_u$. In the general formulation, the set $N_\kappa(x_w)$, constituting the set of nodes reachable by paths of length at most $\kappa$ starting at $x_w$, acts as the set of positive examples to learn the embedding of node $x_w$. This is because these positive examples seek to make the vector representation of $x_w$ similar to the vector representations of the nodes in $N_\kappa(x_w)$. More formally,

$$\Omega^+ = \cup_{x_w \in \mathcal{V}} \{x_w, N_\kappa(x_w)\}, \qquad \Omega^- = \cup_{x_w \in \mathcal{V}} \{x_w, \cup\{x_u : u \sim V - N_\kappa(x_w)\}\}. \quad (10)$$
Instead of iterating over the words in a corpus, the SGNS equivalent is then achieved by iterating over the set of nodes and maximizing the same objective function of Equation 5 using the definitions of the positive and negative example sets from Equation 10. Note that to achieve the SGNS objective the value of κ is set to 1 in the definition of Ω + in Equation 10, i.e. the set of context for a word-node comprises one-hop neighbours as defined by the edge relations of Equation 8.
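To make the graph view concrete, a minimal Python sketch follows; it computes the $\kappa$-adjacency neighborhood of Equation 9 by breadth-first search over an adjacency-list graph and assembles positive and negative example sets in the spirit of Equation 10. The adjacency-list representation, the exhaustive negative set, and the function names are assumptions made only for illustration.

```python
from collections import deque

def kappa_neighborhood(adj, x, kappa=1):
    """Nodes within hop-count <= kappa of x (Equation 9), via BFS over an
    adjacency-list graph {node: set(neighbours)} (an illustrative structure)."""
    seen, frontier = {x}, deque([(x, 0)])
    neighborhood = set()
    while frontier:
        node, dist = frontier.popleft()
        if dist == kappa:
            continue
        for nb in adj.get(node, ()):
            if nb not in seen:
                seen.add(nb)
                neighborhood.add(nb)
                frontier.append((nb, dist + 1))
    return neighborhood

def build_examples(adj, kappa=1):
    """Positive/negative node-context sets in the spirit of Equation 10;
    the negatives here are simply all nodes outside the neighborhood."""
    nodes = set(adj)
    positives, negatives = {}, {}
    for x in nodes:
        nk = kappa_neighborhood(adj, x, kappa)
        positives[x] = nk
        negatives[x] = nodes - nk - {x}
    return positives, negatives

if __name__ == "__main__":
    adj = {"cat": {"sat", "mat"}, "sat": {"cat", "mat"},
           "mat": {"cat", "sat", "rug"}, "rug": {"mat"}}
    pos, neg = build_examples(adj, kappa=1)
    print(pos["cat"], neg["cat"])
```

With $\kappa = 1$ and the SGNS edge set of Equation 8, this construction recovers the usual word2vec positive and negative examples.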
Extending the Graph Model for Non-Local Co-occurrences
The graph based approach of Section 3 allows alternative ways to define the context and learn the objective function to obtain word-node representations. In this section, we describe how to augment the non-local document-level co-occurrence information in the graph-based framework.
Co-occurrence Weights. The first step to include non-local co-occurrences is to modify the edge relations of SGNS (Equation 8) to accommodate weighted document-level co-occurrences. Instead of considering the collection $C = \{t_1, \ldots, t_T\}$ as a stream of words, we consider $C$ as a set of $M$ documents $\{D_i\}_{i=1}^{M}$. First, we make provision to include weighted edges of the form $(x_w, x_u, \omega(x_w, x_u))$ in the edge construction process of Equation 8. The weight $\omega(x_w, x_u)$ between word-nodes $x_w$ and $x_u$ is intended to represent a measure of association between these words.

Next, we describe how to compute the non-local co-occurrence weight between a pair of words. First, we compute the co-occurrence probability of two words $w$ and $u$ as

$$P(w, u) = \frac{\sum_{i=1}^{M} I(w, u, D_i)}{\sum_{i=1}^{M} I(w, D_i) \, \sum_{i=1}^{M} I(u, D_i)}, \quad (11)$$

where the numerator denotes the total number of times that the words $w$ and $u$ co-occur in the collection of all documents, and the denominator denotes the number of times each occurs independently. In our approach, we use a generalized form of Equation 11, where analogous to the Jelinek-Mercer smoothing method (Ponte and Croft, 1998), we take into account the informativeness of the co-occurrences by linearly combining the frequencies with the global statistics of inverse collection frequency. More specifically,

$$P_\alpha(w, u) = \alpha P(w, u) + (1 - \alpha) \frac{T^2}{|\Lambda(w)|\,|\Lambda(u)|}, \quad (12)$$

where $P(w, u)$ represents the maximum likelihood estimate computed by Equation 11 and the denominator denotes the product of the collection frequencies of the terms (as per the notation of Equation 1). It can be seen that Equation 12 allows relative weighting of the term frequency and the informativeness components.
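A small Python sketch of Equations 11 and 12 follows. Whether $I(\cdot)$ counts documents or within-document frequencies is not fully pinned down above, so the sketch assumes binary document-level indicators; the default $\alpha$ and the function name are placeholders rather than the paper's settings.

```python
def nonlocal_cooccurrence(docs, w, u, alpha=0.5):
    """P(w,u) of Equation 11 and its smoothed variant P_alpha of Equation 12.
    `docs` is a list of token lists; T is the total number of tokens.
    Binary document-level indicators are a simplifying assumption."""
    doc_sets = [set(d) for d in docs]
    co = sum(1 for s in doc_sets if w in s and u in s)        # documents containing both
    dw = sum(1 for s in doc_sets if w in s)                   # documents containing w
    du = sum(1 for s in doc_sets if u in s)                   # documents containing u
    p_ml = co / (dw * du) if dw and du else 0.0               # Equation 11

    T = sum(len(d) for d in docs)
    cf_w = sum(d.count(w) for d in docs)                      # collection frequency |Lambda(w)|
    cf_u = sum(d.count(u) for d in docs)
    icf = (T * T) / (cf_w * cf_u) if cf_w and cf_u else 0.0   # inverse collection frequency term
    return alpha * p_ml + (1.0 - alpha) * icf                 # Equation 12

if __name__ == "__main__":
    docs = [["river", "bank", "water"], ["bank", "loan", "money"], ["river", "water", "fish"]]
    print(nonlocal_cooccurrence(docs, "river", "water", alpha=0.8))
```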
Combination with Local Co-occurrences. The next step in our word-node2vec method is to augment the non-local co-occurrence information computed as per Equation 12 with the local co-occurrence of SGNS as defined in Equation 8. For this, analogous to (Pennington et al., 2014), we compute the probability of co-occurrence between a word pair restricted within a window of size $k$ over the whole collection. More formally,

$$P_k(w, u) = \frac{1}{|\Lambda(w)|} \sum_{i \in \Lambda(w)} \sum_{j=-k,\, j \neq 0}^{k} I(t_{i+j} = u). \quad (13)$$

Next, we assign a weight to an edge by combining the local and non-local co-occurrence probabilities estimated from Equations 13 and 12 respectively. Formally speaking,

$$\omega(x_w, x_u) = P_\alpha(w, u) \, P_k(w, u). \quad (14)$$
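As an illustrative sketch of Equations 13 and 14, the fragment below computes the local window probability and combines it with a non-local probability such as the one produced by the hypothetical `nonlocal_cooccurrence` helper above; the boundary handling at document edges and the placeholder value for $P_\alpha$ in the usage example are assumptions.

```python
def local_cooccurrence(docs, w, u, k=5):
    """P_k(w,u) of Equation 13: how often u appears within a +/-k token window
    around occurrences of w, normalised by the number of occurrences of w.
    `docs` is a list of token lists; boundary handling is an assumption."""
    hits, occurrences = 0, 0
    for d in docs:
        for i, t in enumerate(d):
            if t != w:
                continue
            occurrences += 1
            lo, hi = max(0, i - k), min(len(d), i + k + 1)
            hits += sum(1 for j in range(lo, hi) if j != i and d[j] == u)
    return hits / occurrences if occurrences else 0.0

def edge_weight(p_alpha, p_k):
    """Combined edge weight omega(x_w, x_u) = P_alpha(w,u) * P_k(w,u), as in Equation 14."""
    return p_alpha * p_k

if __name__ == "__main__":
    docs = [["river", "bank", "water"], ["bank", "loan", "money"], ["river", "water", "fish"]]
    p_k = local_cooccurrence(docs, "river", "water", k=2)
    # 0.6 stands in for the non-local probability of Equations 11-12 (see the sketch above).
    print(edge_weight(0.6, p_k))
```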
Context with Weighted Edges. Constructing the context of a node x w (Section 3), requires a modification aimed to take into account the edge weights while selecting the neighboring nodes of x w . Instead of defining the context as the entire set of κ-neighborhood N κ (x w ) of a node x w , we define a κ-neighbourhood of length (hop-count), l, which is a subset of l samples drawn from the overall neighbourhood. The likelihood of sampling a node x u from the neighbourhood set is proportional to the weight of the edge (x w , x u ), i.e., ω(x w , x u ). This way of defining the context allows the algorithm to make use of the edge weights (local and non-local cooccurrences) in learning the node representations, i.e. assigning more importance to associations with higher weights in seeking to embed the current word-node close to them.
Our idea, in general, is to use stratified sampling, where each stratum corresponds to a neighbourhood of particular length. The priors assigned to the strata in increasing sequence of adjacency length form a decreasing sequence, which means that the most emphasis is put on direct co-occurrence evidence (i.e. the 1-adjacent neighborhood), then on the 2-adjacent nodes, and so on.
Stratified sampling requires the strata to be mutually disjoint. This means that the κ-neighbourhood of Equation 9 needs to be redefined to ensure that any node belongs to exactly one of the partitions (defined by its hop-count). To state this formally, we define the set of nodes of hop-count exactly j (not up to j) as
H_j(x_w) = \{x_u : h(x_w, x_u) = j\}    (15)
The κ-neighbourhood is then defined as
N_{\kappa}(x_w) = \bigcup_{j=1}^{\kappa} \Big( H_j(x_w) - \bigcup_{j'=1}^{j-1} H_{j'}(x_w) \Big).    (16)
A subset of size l, comprised of stratified samples from N_κ(x_w), is then sampled with decreasing priors \beta_1, \ldots, \beta_\kappa, i.e., \beta_j < \beta_{j-1} for all j = 2, \ldots, \kappa and \sum_{j=1}^{\kappa} \beta_j = 1. Putting things together, the probability of sampling a node from the set N_κ(x_w) defined as per Equation 16 is then given by
P(x_u \mid N_{\kappa}(x_w)) = \beta_j P(x_u \mid H_j(x_w)) = \beta_j \frac{\omega(x_w, x_u)}{\omega(x_w, \cdot)},    (17)
where \omega(x_w, x_u) are edge weights computed with Equation 14 and \omega(x_w, \cdot) denotes the sum of the weights of the edges emanating from node x_w.
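A rough sketch of this stratified context sampling (Equations 15-17) is given below, assuming the word-node graph is stored as an adjacency dictionary with edge weights \omega. How nodes beyond the first hop are weighted is not fully specified by the equations, so the small fallback weight used here is an assumption of this sketch.

```python
import random

def sample_context(graph, w, kappa=2, length=10, betas=(0.7, 0.3)):
    """Sample a context of `length` nodes for word-node w.

    graph : dict mapping node -> dict of {neighbour: edge weight omega}
    betas : priors over strata (hop-counts 1..kappa), decreasing and summing to 1
    """
    # Build disjoint strata H_1, ..., H_kappa (Equations 15-16).
    strata, seen, frontier = [], {w}, {w}
    for _ in range(kappa):
        nxt = {u for v in frontier for u in graph.get(v, {})} - seen
        strata.append(nxt)
        seen |= nxt
        frontier = nxt

    context = []
    for _ in range(length):
        # Pick a stratum with prior beta_j, then a node proportionally to omega (Equation 17).
        j = random.choices(range(kappa), weights=betas)[0]
        if not strata[j]:
            continue
        nodes = list(strata[j])
        # Direct edge weight omega(w, u); a tiny default is used for nodes without a direct edge
        # (an assumption for strata beyond the first hop).
        weights = [graph.get(w, {}).get(u, 1e-6) for u in nodes]
        context.append(random.choices(nodes, weights=weights)[0])
    return context
```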
As a point of note, for our experiments, we obtained optimal results by using κ = 2. Consequently, to simplify the description of our experiments, we name the parameter \beta_1 as β (the parameter \beta_2 is then identical to 1 − β). We would also mention at this point that our proposed way of constructing the context by sampling neighboring nodes is different from the one proposed in (Grover and Leskovec, 2016), which uses a combination of breadth-first (BFS) and depth-first (DFS) traversals, with parameters p and q respectively. Our experiments reveal that our sampling strategy outperforms that of Grover and Leskovec (2016) (treated as a baseline).
Experimental Setup
In this section, we describe our experimental setup to evaluate our new word embedding method.
Dataset
A word embedding algorithm requires a collection to learn word representations. To compare the various word embedding approaches (i.e. our method and the baselines), we use the DBPedia (2014) corpus, which is a collection of abstracts of Wikipedia pages crawled in 2014 1 . Dataset characteristics are outlined in Table 1. As part of preprocessing, we removed words with collection frequency less than 10 and also removed stopwords 2 .
Baselines and Implementation
The objective of our experiments is two-fold. First, to show that a combination of local and global approaches is likely to yield effective embedded representations of word vectors, and second that our proposed graph-based formalism is likely to work better than a trivial black-box way of combining the two sources of information.
Local Co-occurrence approaches. As approaches that use local co-occurrence information, we use three state-of-the-art embedding approaches namely skip-gram word2vec with negative sampling (SGNS) (Mikolov et al., 2013a), Glove (Pennington et al., 2014) and Fasttext (Joulin et al., 2016). All these methods rely only on co-occurrences (at the level of words for the first two and at the level of character n-grams for the last one) within a word or character n-gram window of specified length k (acting as a parameter). Fasttext learns the vector representation of each word by aggregating (vector sum) the vector representations of its constituent n-grams.
Additionally, we also employ a more recent approach, namely ELMO (Peters et al., 2018), which relies on a pre-trained model (comprised of stacked bidirectional LSTMs) to infer vectors for a given context (typically a sequence of words). For our experiments, we use a pre-trained ELMO model (see the parameter settings below).

Document-level Co-occurrence approaches. Although not an embedding approach, the LDA topic modeling algorithm outputs two matrices, namely \theta \in \mathbb{R}^{M \times d} and \phi \in \mathbb{R}^{d \times V}, representing the document-topic and topic-word distributions respectively (Blei et al., 2003). LDA uses document-level word co-occurrences to estimate both these matrices. In principle, one can then use the \phi matrix as a substitute for the word embedding parameter matrix of SGNS (see Equation 5). This gives d-dimensional vectors for each word purely with a global co-occurrence based approach.
Although it is possible to choose other non-local co-occurrence approaches as baselines, e.g. PLSA (Hofmann, 1999) or LSA (Deerwester et al., 1990), it was shown in (Blei et al., 2003) that LDA outperforms each of these. Consequently, we use the stronger baseline of LDA in our experiments.
Combination of Local and Non-local Co-occurrences. To empirically demonstrate the effectiveness of our proposed graph-based word-node embedding, we employ an additional baseline that is a linear combination of the word vectors obtained individually with the local and non-local approaches. More formally, the vector of each word w is given as
\mathbf{w} = \lambda \mathbf{w}_{\text{Local}} + (1 - \lambda) \mathbf{w}_{\text{LDA}},    (18)
where \mathbf{w}_{\text{Local}} is the vector representation of word w obtained by a local co-occurrence baseline, i.e. SGNS or Glove, whereas \mathbf{w}_{\text{LDA}} represents the vector for the word w obtained with LDA. Additionally, we employ the node2vec approach as a baseline. In particular, we use node2vec to learn the word-node representations of the graph constructed as per Section 4. The purpose of this baseline is to show that our way of defining the contexts around word-nodes is more suitable for our task of word embedding than a general-purpose graph node embedding approach.
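A minimal sketch of this black-box combination baseline (Equation 18), assuming both sets of vectors share the same dimensionality; names are illustrative.

```python
import numpy as np

def combine_embeddings(local_vectors, lda_vectors, lam=0.9):
    """Linear combination w = lam * w_local + (1 - lam) * w_lda (Equation 18).

    local_vectors, lda_vectors : dicts mapping word -> numpy array of equal dimension
    """
    shared_vocab = local_vectors.keys() & lda_vectors.keys()
    return {w: lam * local_vectors[w] + (1 - lam) * lda_vectors[w] for w in shared_vocab}
```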
Evaluation Tasks and Datasets
To compare the relative performance of word-node2vec with the baselines, we use a number of datasets, each corresponding to one of the following three evaluation tasks.
Word Similarity. A standard way to measure the effectiveness of embedded words is to measure how well the similarity between a pair of words correlates with human judgments. Two such standard datasets that we use for our experiments are the WSIM-353 (Finkelstein et al., 2014) and the MEN (Bruni et al., 2014) datasets. Both comprise a list of word pairs, with an associated human judged similarity value. This similarity value is expected to be high for semantically similar words, such as 'morning' and 'sunrise' (human assigned score of 49 out of 50), and low for semantically unrelated words, such as 'angel' and 'gasoline' (score of 1 out of 50), both examples being taken from the MEN dataset.
Word Analogy. The word analogy task consists of templates of the form "A:B as C:X", where A, B, and C are given words, whereas X is unknown. Using a vector representation of words this analogy task is solved by retrieving the vector most similar to that of B + C − A. A word embedding is considered effective if it finds a greater number of correct answers (resulting in higher accuracy).
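A simple sketch of the vector-offset procedure for answering such analogy questions, excluding the query words from the candidate set (a standard convention, assumed here):

```python
import numpy as np

def solve_analogy(a, b, c, vectors):
    """Return the word whose vector is most cosine-similar to b + c - a."""
    target = vectors[b] + vectors[c] - vectors[a]
    target = target / np.linalg.norm(target)
    best_word, best_sim = None, -1.0
    for word, vec in vectors.items():
        if word in (a, b, c):          # exclude the query words themselves
            continue
        sim = vec @ target / np.linalg.norm(vec)
        if sim > best_sim:
            best_sim, best_word = sim, word
    return best_word
```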
We employed three different analogy datasets, namely, the Google Analogy (Mikolov et al., 2013a), the MSR Analogy (Mikolov et al., 2013b) and the SemEval-2012 task 2 (Jurgens et al., 2012) datasets. The MSR dataset contains syntactic questions only involving morphological variations. The Google dataset on the other hand contains both syntactic and semantic questions.
Given an analogy 'A:B as C:D', the Semeval-2012 task requires prediction of the degree to which the semantic relations between A and B are similar to those between C and D. In our experiments, we treat the given entity D as unknown and seek to predict D, similar to the MSR and Google analogy datasets. Table 2 provides an overview of examples from these datasets.
Concept Categorization Task. The concept categorization task requires classifying nouns into a concept type derived from an ontology. For this task, we employ the AP (Almuhareb and Poesio, 2005), BLESS (Baroni and Lenci, 2011) and ESSLI 2b (Marco Baroni and Lenci, 2008) datasets. The AP dataset contains 402 nouns from 21 WordNet classes, e.g., nouns such as 'ceremony', 'feast', and 'graduation' belong to the class 'Social Occasion'. The BLESS dataset, designed for the evaluation of distributional semantic models, contains 200 distinct English concrete nouns as target concepts. These nouns are categorized into 17 broad classes.
Evaluation Metrics and Pipeline. The word similarity prediction effectiveness is measured with the help of Spearman's rank correlation coefficient ρ. This measures the rank correlation (higher is better) between the list of word pairs sorted in decreasing order of inter-similarity values as predicted by a word embedding algorithm and the reference list of human judged word pairs. For the analogy and the concept categorization tasks, we report the accuracy in predicting the reference word and that of the class, respectively.
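A sketch of the word-similarity evaluation pipeline, assuming SciPy is available and skipping pairs with out-of-vocabulary words; names are illustrative.

```python
import numpy as np
from scipy.stats import spearmanr

def evaluate_word_similarity(pairs, vectors):
    """pairs: list of (word1, word2, human_score); vectors: dict word -> numpy array."""
    predicted, gold = [], []
    for w1, w2, score in pairs:
        if w1 in vectors and w2 in vectors:
            v1, v2 = vectors[w1], vectors[w2]
            predicted.append(v1 @ v2 / (np.linalg.norm(v1) * np.linalg.norm(v2)))
            gold.append(score)
    # Spearman's rank correlation between predicted cosine similarities and human judgments
    return spearmanr(predicted, gold).correlation
```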
Parameters and Settings. In our experiments, for all the methods, except ELMO, we set the number of dimensions to 200. To find optimal settings for each method (except ELMO), we use the MEN dataset as a development set for tuning the parameters of each method. Each method with the optimal parameter settings is then applied for the rest of the datasets and tasks.
Since we used a pre-trained model for ELMO, the number of dimensions corresponds to the size of the output layer of the network, the value of which in the default configuration of the Python implementation 3 is 1024.
The parameters of SGNS are the window size (k) and the number of negative samples (NS). For the baseline approach SGNS, we varied k from 5 to 40 in steps of 5 and found that the best results are obtained when k = 10 and NS = 5. Similarly, for Glove we chose the optimal settings by varying k within the same range of [5, 40] and found that the optimal ρ for the MEN dataset is obtained for k = 20. We obtain the LDA results by setting the number of topics to 200 (so as to match the dimensionality). As LDA hyper-parameters, we use the settings prescribed in (Griffiths and Steyvers, 2004), i.e., β = 0.1 and α = 0.25 (50/(#topics = 200)).

3 https://github.com/allenai/allennlp/blob/master/tutorials/how_to/elmo.md

Table 3: Word similarity prediction results.
Since we found that SGNS performed significantly better than Glove, we use SGNS vectors for the linear combination method (Equation 18), which we call SGNS-LDA from hereon. The parameter λ was varied within a range of [0.1, 0.9] in steps of 0.1 (λ = 0 and λ = 1 degenerate to LDA and SGNS respectively). We found that the best results are obtained for λ = 0.9.
For node2vec baseline approach of word-node embedding, we varied the parameters p and q (BFS and DFS parameters) within a range of [0.1, 5] and found that the best results on the MEN dataset are given for p = 1 and q = 1 (Grover and Leskovec, 2016). Another parameter in node2vec is the random walk length, l, for which the optimal value was found to be 80.
For word-node2vec, in addition to the window size (k) and the number of negative samples (NS), three more parameters are: i) α, i.e., the importance of the presence of a term relative to its informativeness (Equation 12), ii) β, the prior assigned to sampling from the 1-adjacent neighborhood, and iii) the size of the context sampled from the neighborhood, l (this is analogous to the random walk length parameter of node2vec). Instead of separately optimizing the parameters common to SGNS, we directly use the optimal values of k = 10 and NS = 5 for word-node2vec. The optimal results of the additional parameters, tuned on the MEN dataset, are shown in Table 3.
Results
Word Similarity Prediction. Table 3 shows the results obtained by the competing methods on the word similarity prediction task. It can be seen that Glove turns out to be relatively ineffective in modeling the semantic representations of words as compared to human judgments. SGNS performs significantly better and the settings trained on the MEN dataset generalize well on the WSIM-353 dataset as well. LDA performs rather poorly, indicating that global co-occurrences alone can lead to noisy representations of words. FastText performs worse as compared to SGNS. It is worth mentioning that the performance of ELMO is disappointing on this task of semantic similarity prediction, most likely because it is designed to learn vector representations of words in the presence of a context. A linear combination of SGNS and LDA (Equation 18 with λ = 0.9) does not perform better than SGNS, which means that a simple way of combining the embedded representations obtained individually with local and non-local approaches does not work well.
The node2vec approach of embedding nodes of the word-nodes graph constructed as per the description of Section 4 relies on a random walk based construction of the context of a word node. This random walk based context construction is only able to improve the SGNS results slightly, indicating that random walks can introduce noise in the contexts of word-nodes.
The word-node based graph construction (incorporating local and non-local co-occurrences in a principled way) works particularly well in conjunction with the stratified sampling based approach of selecting context words from the κ-neighborhood. The optimal value of α = 0.5 suggests that document-level co-occurrences should be computed by assigning equal importance to term presence and informativeness. A value of β = 0.7 confirms the hypothesis that more emphasis should be put on direct co-occurrences.
Word Analogy and Concept Categorization. Similar trends are observed in the word analogy and concept categorization tasks in Tables 4 and 5, respectively. Relatively higher improvements with word-node2vec are noted for the MSR analogy task (comprised of syntactic categories). Among the baseline approaches, both node2vec and SGNS-LDA work well on the concept categorization task. However, the performance improvements are inconsistent across datasets, e.g. SGNS-LDA performs well on ESSLI 2b and poorly on AP. Our proposed method configured on the MEN dataset works consistently well across all datasets, which indicates that word-node2vec can generalize well for different tasks.
As a side observation, we note that ELMO performs well for the analogy and concept categorization tasks (yielding the best results in particular on the Google analogy dataset). Although the results are not directly comparable because of differences in the dimensionality of the vectors and also in the collection of documents used for the pre-trained ELMO vectors (the Billion Word benchmark as against DBPedia in our case), it could possibly be reasoned that the additional contextual information of the ELMO vectors turns out to be useful in the analogy task.
Embedding Examples. Table 6 shows an example of the change in the neighbourhood of a sample word in the embedded space obtained by SGNS and word-node2vec. It can be seen from the table that word-node2vec is able to push relevant words, such as 'released' and 'song', within the top 5-NN of the word 'album'. Although the words 'promotional' and 'reissue' are related to 'album', the semantic association of 'released' and 'song' with 'album' is apparently higher. We found that the word 'song' occurs in the local context of the word 'album' only 133,494 times out of a total of 177,487 instances of the word 'album'. This means that a significant percentage of the time (almost 25%), 'song' co-occurs with 'album' at a document level. Our embedding algorithm is able to leverage this information by making the vector for 'song' closer to 'album'.

Sensitivity Analysis. Tables 3-5 show word-node2vec results with optimal parameter settings. We now investigate the effect of varying these parameters on each individual evaluation task. We observe that both term presence and term informativeness are important to model document-level co-occurrences, as seen from the fact that the ρ and accuracy values decrease as α gets close to 0 or 1 (the 1st and 3rd plots from the left of Figure 1). Similarly, it can be seen that the results tend to improve with higher values of β, which confirms that direct associations between words in the word-node graph are more important than transitive ones (2nd plot from the left and the rightmost plot of Figure 1). However, second-order transitive associations are still important because the results tend to decrease for β close to 1.
Conclusions and Future work
We proposed a word embedding approach that leverages document-level non-local co-occurrences, in addition to the window-based local co-occurrences. We proposed a graph-based framework, in which words are represented as nodes and the edges between a pair of words reflect the degree of association between them. This association is a function of both the local and the document-level co-occurrences, which enables our approach to achieve 'the best of both worlds' in word embedding. Experiments show that our proposed method outperforms local approaches, namely word2vec, Glove and FastText, on a number of different tasks. Our approach also outperforms a naive black-box combination of embeddings obtained separately by local and document-level approaches. This proves the importance of addressing both these sources of information jointly in an embedding objective.
In future, we would like to explore ways of applying a similar graph based formalism for learning vectors for documents.
Figure 1: Parameter sensitivity of word-node2vec on word prediction (left column) and word analogy (right column) tasks using WSIM (top row) and MSR (bottom row) datasets.
Table 1: Dataset characteristics of DBPedia-2014.
Table 2: Word analogy datasets overview.

Dataset   Composition              Example
MSR       Syntactic                good:better rough:X
Google    Syntactic and Semantic   Athens:Greece Berlin:X
SemEval   Syntactic and Semantic   dog:bone bird:X
Table 4: Word analogy results.

Table 5: Concept categorization results.

Method          AP       BLESS    ESSLI 2b
SGNS            0.6194   0.7500   0.7500
Glove           0.6343   0.7200   0.7250
FastText        0.6119   0.7950   0.7250
ELMO            0.6368   0.7350   0.7500
LDA             0.3383   0.3900   0.6500
SGNS-LDA        0.5796   0.7850   0.7750
Node2vec        0.6355   0.7500   0.7350
Word-node2vec   0.6393   0.7950   0.7750
Table 6: Nearest neighbors of the word 'album' obtained by SGNS and word-node2vec.
1 http://downloads.dbpedia.org/2014/en/long_abstracts_en.ttl.bz2
2 http://www.lextek.com/manuals/onix/stopwords2.html
Acknowledgements

This work was supported by Science Foundation Ireland as part of the ADAPT Centre (Grant No. 13/RC/2106) (www.adaptcentre.ie). This work started as an internship during the first author's visit to IBM Research Lab, Ireland.
Abdulrahman Almuhareb and Massimo Poesio. 2005. Concept learning and categorization from the web. In Proc. of COGSCI, pages 103-108.

Marco Baroni and Alessandro Lenci. 2011. How we blessed distributional semantic evaluation. In Proc. of the GEMS 2011 Workshop on GEometrical Models of Natural Language Semantics, pages 1-10.

David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent dirichlet allocation. J. Mach. Learn. Res., 3:993-1022.

Elia Bruni, Nam Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. J. Artif. Int. Res., 49:1-47.

Scott Deerwester, Susan T. Dumais, George W. Furnas, Thomas K. Landauer, and Richard Harshman. 1990. Indexing by latent semantic analysis. Journal of the American Society for Information Science, 41(6):391-407.

Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard H. Hovy, and Noah A. Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proc. of NAACL HLT, pages 1606-1615.

Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2014. Placing search in context: The concept revisited. In Proc. of WWW 2014, pages 406-414.

Debasis Ganguly, Dwaipayan Roy, Mandar Mitra, and Gareth J. F. Jones. 2015. Word embedding based generalized language model for information retrieval. In Proc. of SIGIR'15, pages 795-798.

Eric Gaussier and Cyril Goutte. 2005. Relation between plsa and nmf and implications. In Proceedings of the 28th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '05, pages 601-602.

T. L. Griffiths and M. Steyvers. 2004. Finding scientific topics. Proceedings of the National Academy of Sciences, 101(Suppl. 1):5228-5235.

Aditya Grover and Jure Leskovec. 2016. Node2vec: Scalable feature learning for networks. In Proc. of the 22nd ACM SIGKDD 2016, pages 855-864.

Jiafeng Guo, Yixing Fan, Qingyao Ai, and W. Bruce Croft. 2016. A deep relevance matching model for ad-hoc retrieval. In Proc. of CIKM '16, pages 55-64.

Thomas Hofmann. 1999. Probabilistic latent semantic analysis. In Proceedings of the Fifteenth Conference on Uncertainty in Artificial Intelligence, UAI'99, pages 289-296. Morgan Kaufmann Publishers Inc.

Stanislaw Jastrzebski, Damian Lesniak, and Wojciech Marian Czarnecki. 2017. How to evaluate word embeddings? On importance of data efficiency and simple supervised tasks. CoRR, abs/1702.02170.

Armand Joulin, Edouard Grave, Piotr Bojanowski, Matthijs Douze, Hérve Jégou, and Tomas Mikolov. 2016. Fasttext.zip: Compressing text classification models. arXiv preprint arXiv:1612.03651.

David A. Jurgens, Peter D. Turney, Saif M. Mohammad, and Keith J. Holyoak. 2012. Semeval-2012 task 2: Measuring degrees of relational similarity. In Proc. of the First Joint Conference on Lexical and Computational Semantics - Volume 1: Proc. of the Main Conference and the Shared Task, and Volume 2: Proc. of the Sixth International Workshop on Semantic Evaluation, SemEval '12, pages 356-364.

Omer Levy and Yoav Goldberg. 2014. Neural word embedding as implicit matrix factorization. In Advances in Neural Information Processing Systems 27, pages 2177-2185.

Stefan Evert, Marco Baroni, and Alessandro Lenci. 2008. ESSLLI workshop on distributional lexical semantics. ESSLLI Workshop on Distributional Lexical Semantics, 101:1-70.

Tomas Mikolov, Ilya Sutskever, Kai Chen, Gregory S. Corrado, and Jeffrey Dean. 2013a. Distributed representations of words and phrases and their compositionality. In Proc. of NIPS 2013, pages 3111-3119.

Tomas Mikolov, Wen-tau Yih, and Geoffrey Zweig. 2013b. Linguistic regularities in continuous space word representations. In Proc. of NAACL 2013, pages 746-751.

Jonas Mueller and Aditya Thyagarajan. 2016. Siamese recurrent architectures for learning sentence similarity. In Proc. of AAAI'16, pages 2786-2792.

Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word representation. In Proc. of EMNLP 2014, pages 1532-1543.

Matthew E. Peters, Mark Neumann, Mohit Iyyer, Matt Gardner, Christopher Clark, Kenton Lee, and Luke Zettlemoyer. 2018. Deep contextualized word representations. In Proc. of NAACL 2018.

Jay M. Ponte and W. Bruce Croft. 1998. A language modeling approach to information retrieval. In Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '98, pages 275-281. ACM.

Dwaipayan Roy, Debasis Ganguly, Mandar Mitra, and Gareth J. F. Jones. 2016. Word vector compositionality based relevance feedback using kernel density estimation. In Proc. of CIKM'16, pages 1281-1290. ACM.

Yulia Tsvetkov, Manaal Faruqui, Wang Ling, Brian MacWhinney, and Chris Dyer. 2016. Learning the curriculum with bayesian optimization for task-specific word representation learning. In Proc. of ACL'16, pages 130-139. |
248,780,565 | Data Augmentation and Learned Layer Aggregation for Improved Multilingual Language Understanding in Dialogue | Scaling dialogue systems to a multitude of domains, tasks and languages relies on costly and time-consuming data annotation for different domain-task-language configurations. The annotation efforts might be substantially reduced by the methods that generalise well in zero-and few-shot scenarios, and also effectively leverage external unannotated data sources (e.g., Web-scale corpora). We propose two methods to this aim, offering improved dialogue natural language understanding (NLU) across multiple languages: 1) Multi-SentAugment, and 2) LayerAgg. Multi-SentAugment is a self-training method which augments available (typically few-shot) training data with similar (automatically labelled) in-domain sentences from large monolingual Web-scale corpora. LayerAgg learns to select and combine useful semantic information scattered across different layers of a Transformer model (e.g., mBERT); it is especially suited for zero-shot scenarios as semantically richer representations should strengthen the model's cross-lingual capabilities. Applying the two methods with state-of-the-art NLU models obtains consistent improvements across two standard multilingual NLU datasets covering 16 diverse languages. The gains are observed in zero-shot, few-shot, and even in full-data scenarios. The results also suggest that the two methods achieve a synergistic effect: the best overall performance in few-shot setups is attained when the methods are used together. | [
221535522,
224725731,
155092004,
222290596,
202775306,
233241004,
218470125,
218977361,
53110354,
216641842,
235303641,
216036089,
201660404,
19166969,
162183964,
56895551,
17625727
] | Data Augmentation and Learned Layer Aggregation for Improved Multilingual Language Understanding in Dialogue
2017 -2033 May 22-27, 2022
Evgeniia Razumovskaia
Language Technology Lab
University of Cambridge
Ivan Vulić
Language Technology Lab
University of Cambridge
Anna Korhonen
Language Technology Lab
University of Cambridge
Data Augmentation and Learned Layer Aggregation for Improved Multilingual Language Understanding in Dialogue
Association for Computational Linguistics: ACL 2022
2017 -2033 May 22-27, 2022
Scaling dialogue systems to a multitude of domains, tasks and languages relies on costly and time-consuming data annotation for different domain-task-language configurations. The annotation efforts might be substantially reduced by the methods that generalise well in zero-and few-shot scenarios, and also effectively leverage external unannotated data sources (e.g., Web-scale corpora). We propose two methods to this aim, offering improved dialogue natural language understanding (NLU) across multiple languages: 1) Multi-SentAugment, and 2) LayerAgg. Multi-SentAugment is a self-training method which augments available (typically few-shot) training data with similar (automatically labelled) in-domain sentences from large monolingual Web-scale corpora. LayerAgg learns to select and combine useful semantic information scattered across different layers of a Transformer model (e.g., mBERT); it is especially suited for zero-shot scenarios as semantically richer representations should strengthen the model's cross-lingual capabilities. Applying the two methods with state-of-the-art NLU models obtains consistent improvements across two standard multilingual NLU datasets covering 16 diverse languages. The gains are observed in zero-shot, few-shot, and even in full-data scenarios. The results also suggest that the two methods achieve a synergistic effect: the best overall performance in few-shot setups is attained when the methods are used together.
Introduction
The aim of Natural Language Understanding (NLU) in task-oriented dialogue systems is to identify the user's need from their utterance (Xu et al., 2020). This comprises the following crucial information: 1) intents, what the user intends to do, and 2) (typically predefined) slots, associated arguments of the intent (Tur et al., 2010;Tur and De Mori, 2011) which need to be filled with specific values. Intent detection is often framed as a standard sentence classification task, where every sentence maps to one or more intent classes; slot labelling is typically cast as a sequence labelling task, where each word is labelled with a BIO-style slot tag (Bunk et al., 2020), see Figure 1.
The supervised models for NLU in English are plentiful and achieve extremely high accuracy (Louvan and Magnini, 2020a; Qin et al., 2021). At the same time, porting an NLU system to any new domain and language requires collecting a large in-domain dataset, and training a model for the target language (Xu et al., 2020). Such in-domain annotations in multiple languages are extremely expensive and time-consuming (Rastogi et al., 2020), also reflected in the fact that large enough dialogue NLU datasets for other languages are still few and far between (Razumovskaia et al., 2021). This in turn creates the demand for strong multilingual and cross-lingual methods which generalise well and learn effectively in zero-shot and few-shot scenarios. In this work, we propose two methods to this end: 1) Multi-SentAugment, a weakly supervised data augmentation method which improves the capability of current state-of-the-art (SotA) dialogue NLU in few-shot scenarios via self-training; 2) LayerAgg, which learns to effectively leverage and combine the knowledge stored across different layers of a pretrained multilingual Transformer (e.g., mBERT).
The main goal of Multi-SentAugment is to reduce the required amount of labelled data and manual annotation labour by harvesting the large pool of unannotated data, and carefully selecting relevant in-domain examples which can then be automatically labelled (Du et al., 2021). In a nutshell, domain-relevant unannotated sentences are first retrieved from a large multilingual sentence bank. The synthetic labels for the data are then generated by a teacher model, previously trained with available annotated data. A final student model is then trained on the combination of synthetically labeled and annotated data. To the best of our knowledge, our work is the first to mine large unannotated monolingual resources in multiple languages to augment data for multilingual dialogue NLU.
The goal of LayerAgg is to leverage useful lexical and other semantic information scattered across layers (Tenney et al., 2019;) of a pretrained multilingual Transformer. Moving away from the standard fine-tuning practice of using only the representations from the top layer, we hypothesise that the model's cross-lingual capabilities can be increased by forcing it (i) to propagate semantic information from lower layers, as well as (ii) to aggregate/combine semantic information from all its layers. In a nutshell, we propose to use a multilingual encoder with cross-layer Transformer, which selects and combines the knowledge from all layers of a pretrained model during fine-tuning.
Our experiments show that Multi-SentAugment gives consistent improvements in few-shot and full-data scenarios on the two available multilingual dialogue NLU datasets: MultiATIS++ (Xu et al., 2020) and xSID (van der Goot et al., 2021). The results further indicate that LayerAgg improves zero-shot performance on the same datasets. Finally, since the two methods can be independently applied to SotA NLU models, we demonstrate that they yield a synergistic effect: the highest scores on average are achieved with their combination.
Contributions. 1) Multi-SentAugment is a simple yet effective data augmentation approach which leverages unannotated data from large Web-scale corpora to boost multilingual dialogue NLU. 2) LayerAgg is a novel cross-layer attention method which learns to effectively combine useful semantic information from multiple layers of a multilingual Transformer.
3) The two methods applied with SotA NLU models obtain consistent gains across two standard multilingual NLU datasets in zero-shot setups, and in 8 languages in few-shot and full-data setups, boosting the capability of cross-lingual dialogue in resource-lean scenarios.
Related Work and Background
Multilingual NLU for Dialogue Systems is usually divided into two tasks: intent detection and slot labelling (Tur et al., 2010;Xu et al., 2020). In "pre-Transformer" times, the methods for training multilingual NLU systems were based on static multilingual word vectors (Mrkšić et al., 2017;Upadhyay et al., 2018;Schuster et al., 2019), lexicon alignment (Liu et al., 2019b,a), and model or annotation projection via parallel data (Kulshreshtha et al., 2020;López de Lacalle et al., 2020).
Transfer learning with large pretrained multilingual Transformer-based language models (LMs) such as mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020a) has demonstrated currently unmatched performance in many NLU tasks (Liang et al., 2020;Hu et al., 2020;Ruder et al., 2021), including intent classification and slot labelling (Zhang et al., 2019;. Fine-tuning a large multilingual LM has become a standard for multilingual NLU (Zhang et al., 2019;Kulshreshtha et al., 2020). However, the excessively high data annotation costs for multiple domains and languages still hinder progress in multilingual dialogue (Razumovskaia et al., 2021). In this paper, unlike prior work, we propose to use external unannotated data to mine and automatically label in-domain in-language examples which aid learning in low-data regimes across multiple languages.
Data Augmentation in Multilingual NLU, as well as data augmentation methods in NLP in general, aim to produce additional training data automatically, without the need to manually label it. In monolingual English-only settings, English NLU data has been augmented by generating additional data with a large monolingual language model (Peng et al., 2020) such as BERT (Devlin et al., 2019) or GPT-2 (Radford et al., 2019), or from atomic templates (Zhao et al., 2019). In multilingual settings, data augmentation methods for NLU include simple text span substitution and syntactic structure manipulation (Louvan and Magnini, 2020c,b). Recently, code switching (Krishnan et al., 2021) and generating translations through a pivot language (Kaliamoorthi et al., 2021) have also been proposed as data augmentation methods.
The previous work relies on (i) additional components such as syntactic parsers or POS taggers, or (ii) parallel and code-switched data. However, they might be unavailable or of low-quality for many (low-resource) languages. In contrast, Multi-SentAugment relies on the cheapest and largest resource available: monolingual Web-crawled data; it disposes of any dependency parsers and taggers, which makes it more widely applicable. Mining knowledge from Web-scale data was shown effective in various (non-dialogue) text classification tasks (Du et al., 2021) and in MT (Wu et al., 2019). 1 Layer Aggregation in Pretrained LMs. A standard practice is to use the output of the final/top layer of a pretrained LM as input into task-specific classifiers (Devlin et al., 2019;Sun et al., 2019). At the same time, prior work shows that most of (decontextualised) lexical information (Ethayarajh, 2019; and word-order information (Lin et al., 2019) is localised in lower layers of BERT. Middle layers usually encode syntactic information (Hewitt and Manning, 2019; Jawahar et al., 2019) while (contextual) semantic information is spread across all the layers of a pretrained LM (Tenney et al., 2019), with higher layers capturing increasingly abstract language phenomena (Lin et al., 2019;Rogers et al., 2020;Tenney et al., 2019). Kondratyuk and Straka (2019) showed that using a weighted combination of all layers works well in cross-lingual settings for a syntactic task of dependency parsing. In addition, they proposed to use layer dropout to redistribute how the information is localised in a fine-tuned BERT model.
In order to 'unlock' additional semantic knowledge from other layers, we propose an additional Transformer encoder with cross-layer attention as a layer aggregation mechanism. We hypothesise that relying only on the representations from the top layer dilutes mBERT's lexical and semantic information. Moreover, we expect lexically and semantically richer representations to be especially useful for zero-shot settings: aggregated (contextualised) semantic information from lower layers could help correctly identify the intent of the sentence, while lexical information could help identify the slot tag for different languages. 2
Methodology
We assume a standard state-of-the-art approach to dialogue NLU in multiple languages (Xu et al., 2020), based on fine-tuning pretrained multilingual LMs on the tasks of intent detection and slot labelling. Following Xu et al. (2020), we fine-tune the pretrained LM in a standard supervised fashion, with task-specific linear layers stacked on top.
Separate NLU Models. The multilingual encoder for each NLU task is fine-tuned separately, and there is no knowledge exchange (but also no noise or destructive inference) between the two tasks. We adopt a standard task-specific fine-tuning setup (Xu et al., 2020;Siddhant et al., 2020).
Joint NLU Model. Another line of recent work pursued joint modelling of the two tasks, motivated by the intuitive correlation between them. 3 In this work, we follow a standard joint modelling procedure (Xu et al., 2020;Hardalov et al., 2020;Krishnan et al., 2021), where the model consists of a shared multilingual encoder followed by taskspecific linear layers for intent classification and slot labelling. The loss is then simply a sum of two task-dedicated losses. In our experiments, we use mBERT (Devlin et al., 2019) and XLM-R (Conneau et al., 2020a) as the encoder.
Multi-SentAugment ( §3.1) and LayerAgg ( §3.2) are then applied to the joint NLU model, while we also provide detailed comparisons to the separate NLU models as baselines in zero-shot setups.
Multi-SentAugment
Large Web-crawled datasets have been proven useful for extracting additional data for classification tasks in English (Du et al., 2021). We adapt the approach of Du et al. (2021) to multilingual dialogue NLU, that is, we propose to use large Web-crawled corpora to obtain additional in-domain data for dialogue NLU tasks in multiple languages.
For each language l we are given: 1) some annotated training data D_l which consists of |D_l| sentences x_1, ..., x_{|D_l|}, each labelled with an intent class and slot labels (see Figure 1); 2) a large Web-crawled corpus U_l consisting of |U_l| sentences s_1, ..., s_{|U_l|}; 3) an off-the-shelf multilingual sentence encoder F fine-tuned towards semantic sentence similarity, that is, to produce semantic embeddings of input sentences (Reimers and Gurevych, 2020). The data augmentation process then consists of 1) unsupervised data retrieval and 2) self-training. The aim of unsupervised data retrieval is to construct an in-domain unannotated set of sentences by filtering the sentences from U_l. The process is formulated by the following equations:
X = F(x_1, \ldots, x_{|D_l|}); \quad U = F(s_1, \ldots, s_{|U_l|}); \quad \sigma = \frac{U X^{\top}}{\|U\| \, \|X\|} > \theta;
θ is a similarity threshold for sentence filtering: a sentence s_i will be added into the in-domain dataset if there is an annotated sentence x_j ∈ D_l such that σ_{i,j} > θ. As a result of data retrieval, we obtain a set of in-domain unannotated sentences which are similar to the annotated training data D_l. At self-training, we first fine-tune a joint NLU model on the annotated D_l data. We then use this model to annotate the retrieved in-domain sentences. As our final NLU model, we fine-tune a new joint NLU model on the full dataset, combining the D_l set and the filtered and annotated sentences.
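The sketch below outlines the retrieval and self-training steps. The sentence encoder, teacher training routine and annotation function are placeholders for whatever implementations are used (e.g., LASER for encoding and the joint NLU model for labelling); they are not interfaces defined in the paper.

```python
import numpy as np

def retrieve_in_domain(annotated_sents, web_sents, encoder, theta=0.8):
    """Keep web sentences whose max cosine similarity to any annotated sentence exceeds theta."""
    X = encoder(annotated_sents)                      # (|D_l|, d)
    U = encoder(web_sents)                            # (|U_l|, d)
    X = X / np.linalg.norm(X, axis=1, keepdims=True)
    U = U / np.linalg.norm(U, axis=1, keepdims=True)
    sims = U @ X.T                                    # the sigma matrix above
    keep = sims.max(axis=1) > theta
    return [s for s, k in zip(web_sents, keep) if k]

def self_train(train_data, web_sents, encoder, train_fn, annotate_fn, theta=0.8):
    """Multi-SentAugment self-training loop (teacher -> synthetic labels -> student).

    train_fn and annotate_fn are hypothetical callables: train_fn fine-tunes a joint NLU
    model on (sentence, labels) pairs; annotate_fn labels a sentence with the teacher model.
    """
    teacher = train_fn(train_data)                              # teacher on annotated D_l
    mined = retrieve_in_domain([x for x, _ in train_data], web_sents, encoder, theta)
    synthetic = [(s, annotate_fn(teacher, s)) for s in mined]   # teacher labels intents/slots
    return train_fn(train_data + synthetic)                     # final student model
```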
LayerAgg
To ensure the propagation and use of lexical and semantic information from lower layers, we propose a simple layer aggregation technique based on cross-layer attention (Vaswani et al., 2017), illustrated in Figure 2. In short, let w_{ij} be the representation of a word (or WordPiece; Devlin et al. (2019)) at position i at layer j, j = 1, ..., N_l, where N_l is the number of layers in the pretrained LM (e.g., N_l = 12 for mBERT). The layer-aggregated representation w_i of the input at position i is computed as follows:
w_i = T(w_{i,1:N_l}),    (1)
where w_{i,1:N_l} is a sequence comprising all (ordered) per-layer representations w_{ij}, and T is a cross-layer Transformer encoder. In essence, T effectively always operates over a sequence of length N_l: it outputs the representations from all layers, but which have now been self-attended. We then feed the last item (i.e., the N_l-th item) of the sequence representation output by the Transformer T into the task-specific classifiers. Relying on the N_l-th output representation, the model is forced to incorporate the information from all layers into the final representation of the input token w_i. The parameters of T are also updated during fine-tuning.
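A PyTorch sketch of the cross-layer aggregator T is shown below. The projection from the encoder's hidden size down to the aggregator dimension is an assumption made here for dimensional compatibility; the paper itself only specifies the aggregator architecture (a single layer with a few attention heads). The per-token output would then be passed to the intent and slot classification heads.

```python
import torch
import torch.nn as nn

class LayerAggregator(nn.Module):
    """Cross-layer Transformer T: treats the per-layer representations of one token as a sequence."""

    def __init__(self, hidden_size=768, agg_dim=512, n_heads=4):
        super().__init__()
        # Mapping from the encoder's hidden size to the aggregator dimension is an assumption.
        self.proj = nn.Linear(hidden_size, agg_dim)
        layer = nn.TransformerEncoderLayer(d_model=agg_dim, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=1)

    def forward(self, all_layer_states):
        # all_layer_states: tuple of N_l tensors, each of shape (batch, seq_len, hidden_size)
        stacked = torch.stack(all_layer_states, dim=2)      # (batch, seq_len, N_l, hidden)
        b, s, n_l, _ = stacked.shape
        x = self.proj(stacked).view(b * s, n_l, -1)         # one "sequence" of layers per token
        x = self.encoder(x)                                 # cross-layer self-attention
        return x[:, -1, :].view(b, s, -1)                   # keep the N_l-th item per token
```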
Experimental Setup
Evaluation Datasets comprise two standard multilingual dialogue NLU datasets: MultiATIS++ (Xu et al., 2020) and xSID (van der Goot et al., 2021), created by translating monolingual labelled English data into target languages. MultiATIS++ is a single domain (airline) dataset while xSID covers 7 domains including alarm, weather, music, events and reminder. xSID is an evaluation only dataset, i.e., it contains training data only for English. The statistics of the datasets are presented in Table 1. The datasets consist of sentences each labelled with an intent class and BIO slot tags/labels, see Figure 1.
Large (Multilingual) Sentence Banks. We use the CC-100 dataset (Conneau et al., 2020a; Wenzek et al., 2020), which comprises monolingual CommonCrawl data in 116 languages. For computational tractability with resources at our disposal, we rely on the smaller CC-100-100M dataset, a random sample from the full CC-100 4 spanning 100M sentences in each language. CC-100 covers multiple domains, language styles and variations.
Multi-SentAugment: Setup. Unless noted otherwise, we use the LASER multilingual sentence encoder (Artetxe and Schwenk, 2019), pretrained on 93 languages with a sentence similarity objective on parallel data. The similarity threshold θ is set to 0.8. Besides the basic setup, (i) we also analyse the impact of the sentence encoder by running experiments with another SotA multilingual encoder: LaBSE (Feng et al., 2020;Litschko et al., 2021); (ii) we apply an additional filtering step based on the intent confidence of the teacher model, retaining only high-confidence examples. 5
LayerAgg. The aggregator Transformer T contains a single 512-dimensional layer with 4 attention heads. Here, we remind the reader that the N_l-th item of T's output sequence is fed to the task-specific layers (see again §3.2). LayerAgg adds up to 2 million additional parameters, which is ≈ 1% of the total number of trainable parameters in the baseline model. In addition, we present an extensive comparison with the standard layer aggregation method of Kondratyuk and Straka (2019), which is based on cross-layer attention.
Fine-Tuning Setup. 1) In the zero-shot setup, we train the model on the English training data and evaluate on other (target) languages. 2) In the few-shot setup, unless stated otherwise, we add 10 target-language examples (i.e., shots) per intent to the English training data. 3) In the full-data setup, we use the entire training set of the target language (without any English data). For unsupervised sentence retrieval in few-shot and full-data setups, we only use the examples in the target language as our query set D_l (see §3.1). In all experiments, we evaluate on the validation set after each epoch, and train for 20 epochs with a patience of 5 epochs, with Adam (Kingma and Ba, 2015) as the optimiser and a batch size of 32; the learning rate is 5e-5, and the warm-up rate is 0.1. We experiment with mBERT Base and XLM-R Base as multilingual encoders. The hyperparameters were set to the values corresponding to those in Xu et al. (2020).
Results and Discussion
Joint vs Separate NLU. We first establish the performance of joint versus separate baseline NLU models. The main results, provided in Tables 2 and 3, indicate that joint NLU training performs better on intent classification while separate taskspecific NLU models are more beneficial on slot labelling. Our results corroborate the findings from prior work (Schuster et al., 2019;He et al., 2020;Weld et al., 2021). We suspect that joint training works better for intent classification as sentencelevel representations are enriched with lexical information through the additional slot-labelling loss. At the same time, separate training attains stronger performance in slot labelling as it retains more taskspecific representations for each token.
Impact of LayerAgg. The motivation behind LayerAgg is to combine the strengths of both joint and separate training, that is, having sentence-level representations enriched with lexical information while keeping token representations specified. The benefits of LayerAgg in both tasks in zero-shot setups are indicated by the results in Tables 2-3. We observe large improvements with LayerAgg, both on average and for a large number of individual target languages. It is worth noting that LayerAgg provides gains with both underlying multilingual encoders. Besides that, adding LayerAgg also yields more stable performance of the joint model in general (e.g., compare the scores on Japanese and Turkish slot labelling without and with LayerAgg). The gains with LayerAgg also persist in few-shot and full-data setups, as shown in Figure 3.
+LayerAgg versus +Attn. Table 2 also presents a comparison of two layer aggregation techniques: cross-layer attention from Kondratyuk and Straka (2019) (+Attn), now adapted to dialogue NLU tasks, and LayerAgg. While both methods produce gains over the Joint baseline in several target languages, LayerAgg yields much more substantial gains, and is more robust across different model configurations and tasks. While the Attn aggregation simply provides a weighted sum of information encoded across Transformer layers based on its importance to the final prediction, LayerAgg has the capability to analyse and aggregate the information as it evolves between layers (Voita et al., 2019).
Table 3: Zero-shot results on xSID. The average is computed across target languages (excluding English). Highest scores in each task for every encoder per column in bold. The results are averaged across 5 random seeds.

Impact of Multi-SentAugment. The results in Figure 3 suggest that Multi-SentAugment is indeed useful as data augmentation for the two NLU tasks, both in few-shot and full-data scenarios, and for different target languages. 6 Achieving slight gains in full-data scenarios implies that mining additional monolingual data is beneficial even when a large in-domain dataset in the target language is available. Notably, we observe larger gains for Turkish and Hindi in Figure 3d: this is expected, since MultiATIS++ contains a smaller number of sentences for tr and hi than for the other target languages. Finally, the impact of filtering by teacher confidence (see §3.1) is inconsistent for intent classification (i.e., it seems to be target language-dependent) while it improves the results for slot labelling on average. Encouraged by these insights, we will investigate more sophisticated in-domain sentence mining methods in future work.
Combining Multi-SentAugment and LayerAgg results in a synergistic effect, based on the additional slight gains observed in Figure 3 (the full results are available in Appendix C, including the Multi-SentAugment results in 5-shot and 20-shot setups in Appendix D). This is expected as the two methods offer distinct enhancements of the base joint NLU model: (i) Multi-SentAugment includes more diverse sentences and lexical information into the training data (i.e., enhancement at the input level), while (ii) LayerAgg aims to select and combine semantic information spread across mBERT's layers (i.e., feature-level enhancement).
Zero-Shot vs Few-Shot. As discussed before, using Multi-SentAugment and LayerAgg seems to benefit the base NLU model both in low-data and full-data setups; we observe gains also in 5-shot and 20-shot setups (see Appendix D). Similar to other NLP tasks (e.g., named entity recognition, parsing, QA) (Lauscher et al., 2020), few-shot setups (e.g., even having only 5 examples per intent or ≈80 annotated sentences in total) yield huge benefits over zero-shot setups (see Table 4; compare the results in Table 2 and Figure 3). Our results provide another empirical proof calling for more modelling effort in more realistic few-shot cross-lingual transfer setups (Lauscher et al., 2020;Zhao et al., 2021) in future work. We also observe that the results in 10-shot setups when both Multi-SentAugment and LayerAgg are used are mostly on par with the results in 20-shot setups with the base NLU model. In general, this finding validates that the proposed methods can indeed reduce the manual annotation effort.
Analysis and Further Discussion
Target Language Analysis. While both Multi-SentAugment and LayerAgg are language-agnostic techniques per se, the actual transfer results also depend on the linguistic properties of the source and target languages. We thus aim to answer the following question: Which languages benefit most from Multi-SentAugment and LayerAgg? To this end, we study the correlations between zero-shot and few-shot transfer performance (i.e., gains over the joint baseline when using the two methods) and source-to-target language distance, which is based on the language vectors obtained from the URIEL typological database (Littell et al., 2017). Following Lauscher et al. (2020), we consider the following linguistic features: syntax (SYN), encoding syntactic properties; language family memberships (FAM); and geographic locations (GEO).

Table 6: F1 scores in a lexical probe of detecting the 1,000 most frequent words on MultiATIS++.
The results are shown in Table 5. SYN similarity has the highest correlation with zero-shot performance gains in both NLU tasks. We suspect that this might stem from LayerAgg's prop-erty to selectively aggregate information from multiple layers, which is easier to learn if the input sequences have similar syntactic structures. In simple words, LayerAgg might benefit more if similar information is found at similar places in the input sentences. FAM and GEO similarities are more correlated with gains in few-shot settings. This might be due to the fact that languages which are similar genealogically (FAM) and geographically (GEO) have more common lexical stems. It means that Multi-SentAugment extracts sentences with lexically similar words which unlock the generalisation abilities of the model.
Does LayerAgg Enrich Semantic Content?
While the task results seem to suggest this, we design a probing experiment which aims to answer the following question: Do the representations obtained with LayerAgg really capture more semantic information? To this end, we first obtain representations of the 1,000 most frequent words (Conneau et al., 2018; Mehri and Eric, 2021) in MultiATIS++ 7 in each sentence using a frozen mBERT task-tuned on English, with and without LayerAgg. We then aim to identify which word was encoded by training a simple linear classifier. The rationale is that by storing more lexical information in the representations, similar words will obtain similar representations: consequently, the classifier should more easily identify the correct word.
The micro-averaged F1 scores are shown in Table 6. The same positive trend with large gains in the classification score is observed in all languages, confirming our hypothesis. We note that the large gains are reported not only for English (which was used for task fine-tuning), but also in other languages, suggesting the benefits of LayerAgg in boosting cross-lingual lexical capabilities of multilingual encoders in transfer scenarios.
7 For a word tokenised into more than 1 WordPiece, we obtain its vector by averaging its constituent WordPiece vectors.

Cross-lingual Similarity in LayerAgg. We now assess how LayerAgg captures cross-lingual representation similarity by comparing self-attention maps for different languages emerging from Transformer T. We analyse the similarity of representations of the source language (en) with each target language in MultiATIS++ and xSID using linear Centered Kernel Alignment (l-CKA; Kornblith et al. 2019), a standard tool for such analyses in Transformer-based models (Conneau et al., 2020b). Linear CKA is a representation similarity metric for representations obtained from neural networks. L-CKA is invariant to orthogonal transformation and isotropic scaling. More formally, it is defined as follows:
$$\mathrm{CKA}(X, Y) = \frac{\lVert Y^{\top} X \rVert_F^2}{\lVert X^{\top} X \rVert_F \, \lVert Y^{\top} Y \rVert_F}$$
where X and Y are the input matrices. We measure 1) cross-lingual correspondence for slots, where l-CKA is computed between the representations of the same slot in different languages (a slot representation is the average of the attention maps of the tokens labelled with that slot; we cannot compare attention maps for each word/WordPiece directly since we lack alignments between the words across sentences in different languages); and 2) the correlation between the l-CKA scores and transfer performance.
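For reference, linear CKA as defined above can be computed directly from the two representation matrices. The following is a minimal sketch (not the authors' code), assuming X and Y hold the mean-pooled slot representations of two languages with rows aligned by slot type; column-centring follows Kornblith et al. (2019).

```python
# Minimal sketch of linear CKA as defined above (not the authors' code).
# X and Y are (n_slots, dim) matrices whose rows are the mean-pooled slot
# representations for two languages, aligned by slot type.
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    # Column-centre the representations, following Kornblith et al. (2019).
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    numerator = np.linalg.norm(Y.T @ X, ord="fro") ** 2
    denominator = np.linalg.norm(X.T @ X, ord="fro") * np.linalg.norm(Y.T @ Y, ord="fro")
    return float(numerator / denominator)
```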
The l-CKA scores for MultiATIS++ in Figure 4 reveal high similarities between self-attention maps for similar languages. For instance, the scores are high between Romance languages in MultiATIS++ and between Germanic languages in xSID. At the same time, the scores are low between ja and the Romance languages, and between tr and all other (non-Turkic) languages. Spearman's ρ correlation scores between the l-CKA scores and zero-shot transfer performance are also very strong: for MultiATIS++, ρ = 0.95 (intent classification) and ρ = 0.92 (slot labelling), while for xSID, ρ = 0.77 (intent classification) and ρ = 0.59 (slot labelling).
Another Multilingual Sentence Encoder? Intuitively, the effectiveness of Multi-SentAugment depends on the underlying multilingual sentence encoder F. We now analyse how much performance differs if we replace one state-of-the-art encoder (i.e., LASER) with another, LaBSE (Feng et al., 2020), running Multi-SentAugment with LaBSE in 3 languages from 3 different language families that also use different scripts: Turkish, Hindi and Japanese. The results in Table 7 do indicate some performance variance across tasks and languages: LaBSE is slightly better in full-data scenarios, while LASER performs better in few-shot scenarios. In future work on Multi-SentAugment, we will investigate encoder ensembles, and we plan to make the mining process more scalable and quicker.
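The sentence-mining step underlying this comparison can be sketched generically, independently of the chosen encoder. The snippet below is a hypothetical illustration, not the paper's implementation: it assumes the seed sentences and the candidate monolingual sentences have already been embedded with the selected encoder (e.g., LASER or LaBSE), and it simply retrieves the most similar candidates by cosine similarity.

```python
# Generic sketch of similarity-based sentence mining with a swappable
# multilingual sentence encoder; this is an illustration, not the paper's
# implementation. `seed_vecs` and `candidate_vecs` are assumed to contain the
# embeddings of the seed sentences and of the monolingual candidate sentences.
import numpy as np

def mine_similar_sentences(seed_vecs: np.ndarray,
                           candidate_vecs: np.ndarray,
                           top_k: int = 10) -> list:
    """Return, for each seed sentence, the indices of its top_k nearest candidates."""
    # L2-normalise so that the dot product equals cosine similarity.
    seeds = seed_vecs / np.linalg.norm(seed_vecs, axis=1, keepdims=True)
    cands = candidate_vecs / np.linalg.norm(candidate_vecs, axis=1, keepdims=True)
    similarities = seeds @ cands.T                       # (n_seeds, n_candidates)
    top = np.argsort(-similarities, axis=1)[:, :top_k]   # highest similarity first
    return top.tolist()
```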
Conclusion and Future Work
We presented 1) LayerAgg, a layer aggregation method which learns to effectively combine useful semantic information from multiple layers of a pretrained multilingual Transformer, and 2) Multi-SentAugment, a data augmentation approach that leverages unannotated Web-scale monolingual corpora to reduce manual annotation efforts. Our results suggest that both methods, applied with state-of-the-art multilingual dialogue NLU models, yield performance benefits both for intent classification and for slot labelling. The methods obtain consistent gains in zero-shot, few-shot and full-data setups on 2 multilingual NLU datasets spanning 16 languages. In future work, we will investigate further applications of Multi-SentAugment in cross-lingual settings (e.g., by mining sentences in languages from the same language family). We will also extend the methods towards truly low-resource languages. The code is available online at: github.com/cambridgeltl/MultiSentAugment_LayerAgg.
References
Mikel Artetxe and Holger Schwenk. 2019. Massively multilingual sentence embeddings for zero-shot cross-lingual transfer and beyond. Transactions of the Association for Computational Linguistics, 7:597-610.

Table 11: Full-data results on MultiATIS++. Acronyms: +MSA = +Multi-SentAugment; +MSA FILT = +Multi-SentAugment filtered by teacher model confidence; +LA = +LayerAgg; +LA +MSA = +LayerAgg +Multi-SentAugment; +LA +MSA FILT = +LayerAgg +Multi-SentAugment filtered by teacher model confidence. Highest scores in each task per column in bold. The underlying multilingual model is mBERT.
D 5-shot and 20-shot Results with Multi-SentAugment

F l-CKA Similarities on xSID
Figure 1: Illustration of two user utterances in the ATIS flight domain with associated intents and slot tags.

Figure 2: Illustration of the LayerAgg method.

Figure 3: Few-shot and full-data results on MultiATIS++: (a) few-shot intent classification; (b) few-shot slot labelling; (c) full-data intent classification; (d) full-data slot labelling. BASE = joint training baseline; MSA = +Multi-SentAugment; MSA FILT = +Multi-SentAugment filtered by teacher model confidence; LA = +LayerAgg; LA MSA = +LayerAgg +Multi-SentAugment; LA MSA FILT = +LayerAgg +Multi-SentAugment filtered by teacher model confidence. Results are presented for mBERT, with the same trends observed when using XLM-R. The full results for the few-shot and full-data scenarios are available in Appendix C.

Figure 4: l-CKA similarities of mean-pooled representations of slots between different languages in MultiATIS++. For a similar plot for xSID see the Appendix.

Figure 5: l-CKA similarities of mean-pooled representations of slots between different languages in xSID.
Table 2: Zero-shot results on MultiATIS++ (English is the source language in all experiments). The average is computed across target languages (excluding English). Highest scores in each task for every encoder per column in bold. The results are averaged across 5 random seeds. +Attn refers to using standard cross-layer attention as layer aggregation, as done in prior work (Kondratyuk and Straka, 2019).

Target language   ar     da     de     st     en     id     it     ja     kk     nl     sr     tr     zh     AVG
Intent classification (Accuracy × 100)
Joint mBERT       46.13  74.07  62.67  47.07  98.80  68.00  58.47  35.47  40.07  65.87  58.13  47.60  72.61  56.35
+LayerAgg         51.13  72.93  63.00  49.47  98.67  69.00  62.20  39.33  47.53  65.73  61.73  50.80  69.64  58.54
Joint XLM-R       51.07  86.40  70.73  48.20  98.73  81.87  69.13  39.60  45.53  79.20  70.07  72.00  77.60  65.95
+LayerAgg         57.40  86.60  73.00  53.33  98.80  83.27  73.07  46.67  48.80  80.27  72.33  75.93  85.60  69.69
Slot labelling (Slot F1 × 100)
Joint mBERT       19.98  34.66  35.86  17.39  95.37  29.45  34.63  23.28  33.58  38.37  25.74  32.90  63.80  32.47
+LayerAgg         21.00  36.21  37.97  18.51  94.27  28.74  35.50  30.19  35.58  38.91  25.79  35.32  62.00  33.77
Joint XLM-R       32.40  68.81  53.72  20.68  94.97  64.31  56.93  25.45  28.97  71.57  48.96  46.78  56.42  47.91
+LayerAgg         35.36  68.50  52.16  21.24  95.67  66.21  56.78  23.68  28.60  68.10  50.57  47.91  56.96  48.01
Table 4: Impact of the amount of annotated examples in the target language. The results are averages across 8 target languages on MultiATIS++ (Xu et al., 2020) with the baseline Joint NLU model (with mBERT as the multilingual encoder).
Data setup   Task                    Method                         SYN       FAM       GEO
Zero-shot    Intent classification   LayerAgg                      -0.9356   -0.5252   -0.6849
             Slot labelling          LayerAgg                       0.6787    0.5392   -0.0509
Few-shot     Intent classification   LayerAgg                      -0.1970   -0.2830   -0.1556
                                     Multi-SentAugment              0.2433    0.0497   -0.5229
                                     LayerAgg + Multi-SentAugment   0.5274    0.0192   -0.1298
             Slot labelling          LayerAgg                      -0.4227   -0.3112   -0.9544
                                     Multi-SentAugment             -0.0032    0.4203    0.3934
                                     LayerAgg + Multi-SentAugment   0.1525   -0.1367   -0.6525

Table 5: Correlation between performance gains provided by each method (LayerAgg, Multi-SentAugment, and their combination) on MultiATIS++ and language distance scores between English as the source language and target languages, based on different typological features from URIEL (SYN, FAM, GEO).

             de     en     es     fr     hi     pt     tr     AVG
Joint        86.96  86.03  75.12  92.31  90.0   86.64  53.69  81.54
+LayerAgg    97.83  97.53  83.19  95.33  91.16  89.48  58.49  87.57

Table 6: F1 scores in a lexical probe of detecting the 1,000 most frequent words on MultiATIS++.
Table 7: A comparison of LASER and LaBSE as underlying encoders for Multi-SentAugment. A model variant without LayerAgg is used; very similar trends are observed with the +LayerAgg variant (see the Appendix).
Simon Kornblith, Mohammad Norouzi, Honglak Lee, and Geoffrey Hinton. 2019. Similarity of neural network representations revisited. In International Conference on Machine Learning, pages 3519-3529. PMLR.

Jitin Krishnan, Antonios Anastasopoulos, Hemant Purohit, and Huzefa Rangwala. 2021. Multilingual code-switching for zero-shot cross-lingual intent prediction and slot filling. arXiv preprint arXiv:2103.07792.

Saurabh Kulshreshtha, Jose Luis Redondo Garcia, and Ching-Yun Chang. 2020. Cross-lingual alignment methods for multilingual BERT: A comparative study. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 933-942, Online. Association for Computational Linguistics.

Anne Lauscher, Vinit Ravishankar, Ivan Vulić, and Goran Glavaš. 2020. From zero to hero: On the limitations of zero-shot language transfer with multilingual Transformers. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4483-4499, Online. Association for Computational Linguistics.

Yaobo Liang, Nan Duan, Yeyun Gong, Ning Wu, Fenfei Guo, Weizhen Qi, Ming Gong, Linjun Shou, Daxin Jiang, Guihong Cao, Xiaodong Fan, Ruofei Zhang, Rahul Agrawal, Edward Cui, Sining Wei, Taroon Bharti, Ying Qiao, Jiun-Hung Chen, Winnie Wu, Shuguang Liu, Fan Yang, Daniel Campos, Rangan Majumder, and Ming Zhou. 2020. XGLUE: A new benchmark dataset for cross-lingual pre-training, understanding and generation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 6008-6018, Online. Association for Computational Linguistics.

Yongjie Lin, Yi Chern Tan, and Robert Frank. 2019. Open sesame: Getting inside BERT's linguistic knowledge. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 241-253, Florence, Italy. Association for Computational Linguistics.
Tanja Bunk, Daksh Varshneya, Vladimir Vlasov,
and Alan Nichol. 2020. DIET: Lightweight lan-
guage understanding for dialogue systems. CoRR,
abs/2004.09936.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal,
Vishrav Chaudhary, Guillaume Wenzek, Francisco
Guzmán, Edouard Grave, Myle Ott, Luke Zettle-
moyer, and Veselin Stoyanov. 2020a. Unsupervised
cross-lingual representation learning at scale. In
Proceedings of the 58th Annual Meeting of the Asso-
ciation for Computational Linguistics, pages 8440-
8451, Online. Association for Computational Lin-
guistics.
Alexis Conneau, German Kruszewski, Guillaume Lam-
ple, Loïc Barrault, and Marco Baroni. 2018. What
you can cram into a single $&!#* vector: Probing
sentence embeddings for linguistic properties. In
Proceedings of the 56th Annual Meeting of the As-
sociation for Computational Linguistics (Volume 1:
Long Papers), pages 2126-2136, Melbourne, Aus-
tralia. Association for Computational Linguistics.
Alexis Conneau, Shijie Wu, Haoran Li, Luke Zettle-
moyer, and Veselin Stoyanov. 2020b. Emerging
cross-lingual structure in pretrained language mod-
els. In Proceedings of the 58th Annual Meeting
of the Association for Computational Linguistics,
pages 6022-6034, Online. Association for Compu-
tational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and
Kristina Toutanova. 2019. BERT: Pre-training of
deep bidirectional transformers for language under-
standing. In Proceedings of the 2019 Conference
of the North American Chapter of the Association
for Computational Linguistics: Human Language
Technologies, Volume 1 (Long and Short Papers),
pages 4171-4186, Minneapolis, Minnesota. Associ-
ation for Computational Linguistics.
Jingfei Du, Edouard Grave, Beliz Gunel, Vishrav
Chaudhary, Onur Celebi, Michael Auli, Veselin
Stoyanov, and Alexis Conneau. 2021. Self-training
improves pre-training for natural language under-
standing. In Proceedings of the 2021 Conference of
the North American Chapter of the Association for
Computational Linguistics: Human Language Tech-
nologies, pages 5408-5418, Online. Association for
Computational Linguistics.
Kawin Ethayarajh. 2019. How contextual are contex-
tualized word representations? comparing the geom-
etry of BERT, ELMo, and GPT-2 embeddings. In
Proceedings of the 2019 Conference on Empirical
Methods in Natural Language Processing and the
9th International Joint Conference on Natural Lan-
guage Processing (EMNLP-IJCNLP), pages 55-65,
Hong Kong, China. Association for Computational
Linguistics.
Fangxiaoyu Feng, Yinfei Yang, Daniel Cer, Naveen
Arivazhagan, and Wei Wang. 2020. Language-
agnostic BERT sentence embedding.
CoRR,
abs/2007.01852.
Goran Glavaš and Ivan Vulić. 2021. Is supervised syn-
tactic parsing beneficial for language understanding
tasks? an empirical investigation. In Proceedings of
the 16th Conference of the European Chapter of the
Association for Computational Linguistics: Main
Volume, pages 3090-3104, Online. Association for
Computational Linguistics.
Momchil Hardalov, Ivan Koychev, and Preslav Nakov.
2020. Enriched pre-trained transformers for joint
slot filling and intent detection. arXiv preprint
arXiv:2004.14848.
Keqing He, Yuanmeng Yan, and Weiran Xu. 2020.
Adversarial cross-lingual transfer learning for slot
tagging of low-resource languages. In 2020 In-
ternational Joint Conference on Neural Networks
(IJCNN), pages 1-8. IEEE.
John Hewitt and Christopher D. Manning. 2019. A
structural probe for finding syntax in word repre-
sentations. In Proceedings of the 2019 Conference
of the North American Chapter of the Association
for Computational Linguistics: Human Language
Technologies, Volume 1 (Long and Short Papers),
pages 4129-4138, Minneapolis, Minnesota. Associ-
ation for Computational Linguistics.
Junjie Hu, Sebastian Ruder, Aditya Siddhant, Gra-
ham Neubig, Orhan Firat, and Melvin Johnson.
2020. XTREME: A massively multilingual multi-
task benchmark for evaluating cross-lingual gener-
alisation. In International Conference on Machine
Learning, pages 4411-4421. PMLR.
Ganesh Jawahar, Benoît Sagot, and Djamé Seddah.
2019. What does BERT learn about the structure
of language? In Proceedings of the 57th Annual
Meeting of the Association for Computational Lin-
guistics, pages 3651-3657, Florence, Italy. Associa-
tion for Computational Linguistics.
Prabhu Kaliamoorthi, Aditya Siddhant, Edward Li,
and Melvin Johnson. 2021. Distilling large lan-
guage models into tiny and effective students using
pQRNN. arXiv preprint arXiv:2101.08890.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A
method for stochastic optimization. In Proceedings
of ICLR 2015.
Dan Kondratyuk. 2019. Cross-lingual lemmatization
and morphology tagging with two-stage multilin-
gual BERT fine-tuning. In Proceedings of the 16th
Workshop on Computational Research in Phonetics,
Phonology, and Morphology, pages 12-18, Florence,
Italy. Association for Computational Linguistics.
Dan Kondratyuk and Milan Straka. 2019. 75 lan-
guages, 1 model: Parsing universal dependencies
universally. In Proceedings of the 2019 Confer-
ence on Empirical Methods in Natural Language
Processing and the 9th International Joint Confer-
ence on Natural Language Processing (EMNLP-
IJCNLP), pages 2779-2795.
A Language Codes

en     English
ar     Arabic
da     Danish
de     German
de-st  South Tyrolean German dialect
es     Spanish
fr     French
hi     Hindi
id     Indonesian
it     Italian
ja     Japanese
kk     Kazakh
nl     Dutch
pt     Portuguese
sr     Serbian
tr     Turkish
zh     Chinese
th     Thai

Table 8: Language codes used in the paper.

B Training Hyperparameters
Hyperparameter   Value
Optimizer        Adam
Learning Rate    5e-5
Batch Size       32
BERT model       BERT base; multilingual cased
XLM-R model      XLM-R base

Table 9: Training hyperparameters.

C Full Results for Full-Data and 10-shot Setups

Intent classification (Accuracy × 100)
Target language
de
es
fr
hi
ja
pt
tr
zh
AVG
Joint
96.08
95.07
98.99
79.13
78.12
90.59
73.19
97.09
88.53
+MSA
97.09
96.86
97.42
81.76
79.28
95.74
78.87
94.18
90.15
+MSA FILT
97.31
96.64
98.21
78.56
82.53
94.40
79.15
92.61
89.93
+LA
98.10
95.07
97.20
83.20
79.13
95.96
71.49
95.52
89.46
+LA +MSA
92.95
96.75
97.42
84.38
79.73
96.87
72.34
95.97
89.55
+LA +MSA FILT 97.87
91.94
97.47
84.84
79.73
95.63
78.87
96.08
90.30
Slot labelling (Slot F1 × 100)
Joint
85.41
80.52
82.16
74.12
78.63
83.34
71.65
80.22
79.51
+MSA
82.95
80.70
82.41
73.34
81.92
84.10
68.11
80.85
79.30
+MSA FILT
86.19
81.90
82.79
76.02
82.55
83.62
66.82
76.18
79.51
+LA
85.50
82.13
82.62
73.80
75.64
84.37
71.92
73.40
78.67
+LA +MSA
85.48
83.10
82.97
72.87
80.99
84.46
68.46
76.30
79.33
+LA +MSA FILT 85.89
80.38
81.45
76.71
77.92
85.00
74.24
76.34
79.74
Table 10: Few-shot results on MultiATIS++. Acronyms: +MSA = +Multi-SentAugment; +MSA FILT = +Multi-SentAugment filtered by teacher model confidence; +LA = +LayerAgg; +LA +MSA = +LayerAgg +Multi-SentAugment; +LA +MSA FILT = +LayerAgg +Multi-SentAugment filtered by teacher model confidence. Highest scores in each task per column in bold. The underlying multilingual model is mBERT.

Intent classification (Accuracy × 100)  Slot labelling (Slot F1 × 100)
Target language
de
es
fr
hi
ja
pt
tr
zh
AVG
Joint
98.65
97.76
97.87
88.26
95.97
97.98
84.26
94.66
94.43
+MSA
98.54
97.54
98.21
88.71
96.42
97.09
82.41
94.49
94.18
+MSA FILT
98.43
96.64
97.87
88.94
96.75
97.65
85.82
94.83
94.62
+LA
98.88
96.65
98.54
91.67
96.64
97.42
83.97
96.98
95.09
+LA +MSA
98.77
97.54
98.54
88.72
96.64
98.10
84.40
96.86
94.95
+LA +MSA FILT 98.66
97.31
97.65
91.76
96.75
97.42
82.84
96.98
94.92
Joint
94.02
85.37
88.26
78.11
91.01
91.05
64.14
91.41
85.42
+MSA
94.02
85.05
89.39
80.45
88.35
91.06
73.32
90.93
86.57
+MSA FILT
93.65
85.12
88.77
80.78
90.56
90.99
67.41
91.67
86.12
+LA
94.26
85.73
89.02
80.92
92.03
90.77
71.09
92.33
87.02
+LA +MSA
93.16
85.69
89.10
81.97
92.24
91.36
70.14
91.59
86.91
+LA +MSA FILT 93.86
85.96
88.68
80.82
91.81
90.87
69.29
92.52
86.72
Intent classification (Accuracy × 100)
Target language
de
es
fr
hi
ja
pt
tr
zh
AVG
Joint
96.19
94.63
96.08
63.74
78.28
95.07
60.00
93.06
84.63
+MSA
92.72
92.50
94.40
69.90
81.64
93.62
64.26
89.14
84.77
+MSA FILT
97.20
96.87
97.31
77.77
79.28
95.19
61.14
90.37
86.89
Slot labelling (Slot F1 × 100)
Joint
83.31
77.66
79.95
67.00
72.32
82.5
62.66
75.19
75.08
+MSA
80.12
75.81
79.24
69.64
65.86
82.72
62.81
74.46
73.83
+MSA FILT
83.16
79.25
78.62
70.49
74.30
81.22
62.39
72.08
75.19
Table 12: 5-shot results of Multi-SentAugment on MultiATIS++. Acronyms: +MSA = +Multi-SentAugment; +MSA FILT = +Multi-SentAugment filtered by teacher model confidence. Highest scores in each task per column in bold. The underlying multilingual model is mBERT.

Slot labelling (Slot F1 × 100)
Target language
de
es
fr
hi
ja
pt
tr
zh
AVG
Intent classification (Accuracy × 100)
Joint
97.54
89.81
97.65
84.38
88.80
92.05
77.30
87.46
89.37
+MSA
97.65
95.97
98.43
80.96
84.43
95.41
76.03
93.62
90.31
+MSA FILT
97.09
91.15
98.10
87.57
85.14
96.53
78.30
84.99
89.86
Joint
88.93
84.03
85.63
73.15
82.12
85.09
72.88
78.05
81.24
+MSA
87.99
82.41
84.03
74.99
82.38
85.37
71.91
83.59
81.58
+MSA FILT
88.94
81.79
84.00
76.56
81.83
83.74
72.08
84.13
81.63
Table 13: 20-shot results of Multi-SentAugment on MultiATIS++. Acronyms: +MSA = +Multi-SentAugment; +MSA FILT = +Multi-SentAugment filtered by teacher model confidence. Highest scores in each task per column in bold. The underlying multilingual model is mBERT.

E Impact of Sentence Encoder (+LayerAgg Variant)

Model       F       hi     ja     tr     AVG
Intent classification (Accuracy × 100)
Full-data   LASER   88.71  96.64  84.40  89.92
            LaBSE   90.08  96.98  83.55  90.2
Few-shot    LASER   84.28  79.73  72.34  78.78
            LaBSE   79.93  77.72  77.73  78.46
Slot labelling (Slot F1 × 100)
Full-data   LASER   81.97  92.24  70.14  81.45
            LaBSE   82.85  91.40  69.62  81.29
Few-shot    LASER   72.87  80.99  68.46  74.11
            LaBSE   72.68  76.78  72.72  74.06

Table 14: Impact of the chosen multilingual sentence encoder: LASER (Artetxe and Schwenk, 2019) versus LaBSE (Feng et al., 2020) in full-data and few-shot scenarios for intent classification and slot labelling, for the LayerAgg model variant.
Unlike Du et al. (2021), we do not tune pretrained language models to sentence similarity, but use off-the-shelf pretrained multilingual sentence encoders (Artetxe and Schwenk, 2019; Feng et al., 2020; Litschko et al., 2021).
For instance, 10.07.2021 will typically be identified as a date in many languages.
Information about the slots in an utterance could be informative of its intent, and vice versa. For instance, an utterance containing temperature unit slot is more likely to belong to intent find_weather than to intent set_alarm.
http://data.statmt.org/cc-100/
In practice, when we label extracted sentences with the teacher model, we only retain the sentences where the teacher model is confident in its prediction, that is, it assigns the intent class probability p ≥ 0.95.
We suspect that a slight performance drop in few-shot setups for zh and ja mostly stems from some discrepancy in tokenization between MultiATIS++ and CC-100.
Acknowledgements

We thank the anonymous reviewers for their helpful comments and suggestions. This work is supported by the ERC PoC Grant MultiConvAI: Enabling Multilingual Conversational AI (no. 957356), and a Huawei research donation.
Evaluating multilingual text encoders for unsupervised cross-lingual retrieval. Robert Litschko, Ivan Vulić, Simone Paolo Ponzetto, Goran Glavaš, 10.1007/978-3-030-72113-8_23Advances in Information Retrieval. Springer International PublishingRobert Litschko, Ivan Vulić, Simone Paolo Ponzetto, and Goran Glavaš. 2021. Evaluating multilin- gual text encoders for unsupervised cross-lingual re- trieval. In Advances in Information Retrieval, pages 342-358. Springer International Publishing.
URIEL and lang2vec: Representing languages as typological, geographical, and phylogenetic vectors. Patrick Littell, David R Mortensen, Ke Lin, Katherine Kairis, Carlisle Turner, Lori Levin, Proceedings of the 15th Conference of the European Chapter. the 15th Conference of the European ChapterValencia, Spain2Short Papers. Association for Computational LinguisticsPatrick Littell, David R. Mortensen, Ke Lin, Kather- ine Kairis, Carlisle Turner, and Lori Levin. 2017. URIEL and lang2vec: Representing languages as typological, geographical, and phylogenetic vectors. In Proceedings of the 15th Conference of the Euro- pean Chapter of the Association for Computational Linguistics: Volume 2, Short Papers, pages 8-14, Valencia, Spain. Association for Computational Lin- guistics.
Arabic named entity recognition: What works and what's next. Liyuan Liu, Jingbo Shang, Jiawei Han, 10.18653/v1/W19-4607Proceedings of the Fourth Arabic Natural Language Processing Workshop. the Fourth Arabic Natural Language Processing WorkshopFlorence, ItalyAssociation for Computational LinguisticsLiyuan Liu, Jingbo Shang, and Jiawei Han. 2019a. Arabic named entity recognition: What works and what's next. In Proceedings of the Fourth Arabic Natural Language Processing Workshop, pages 60- 67, Florence, Italy. Association for Computational Linguistics.
Zero-shot cross-lingual dialogue systems with transferable latent variables. Zihan Liu, Jamin Shin, Yan Xu, Genta Indra Winata, Peng Xu, Andrea Madotto, Pascale Fung, 10.18653/v1/D19-1129Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsZihan Liu, Jamin Shin, Yan Xu, Genta Indra Winata, Peng Xu, Andrea Madotto, and Pascale Fung. 2019b. Zero-shot cross-lingual dialogue systems with trans- ferable latent variables. In Proceedings of the 2019 Conference on Empirical Methods in Natu- ral Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1297-1303, Hong Kong, China. Association for Computational Linguistics.
Zero-resource cross-domain named entity recognition. Zihan Liu, Pascale Genta Indra Winata, Fung, 10.18653/v1/2020.repl4nlp-1.1Proceedings of the 5th Workshop on Representation Learning for NLP. the 5th Workshop on Representation Learning for NLPAssociation for Computational LinguisticsOnlineZihan Liu, Genta Indra Winata, and Pascale Fung. 2020. Zero-resource cross-domain named entity recognition. In Proceedings of the 5th Workshop on Representation Learning for NLP, pages 1-6, On- line. Association for Computational Linguistics.
Building a task-oriented dialog system for languages with no training data: the case for Basque. Maddalen López De Lacalle, Xabier Saralegi, Iñaki San Vicente, Proceedings of the 12th Language Resources and Evaluation Conference. the 12th Language Resources and Evaluation ConferenceMarseille, FranceEuropean Language Resources AssociationMaddalen López de Lacalle, Xabier Saralegi, and Iñaki San Vicente. 2020. Building a task-oriented dia- log system for languages with no training data: the case for Basque. In Proceedings of the 12th Lan- guage Resources and Evaluation Conference, pages 2796-2802, Marseille, France. European Language Resources Association.
Recent neural methods on slot filling and intent classification for task-oriented dialogue systems: A survey. Samuel Louvan, Bernardo Magnini, 10.18653/v1/2020.coling-main.42Proceedings of the 28th International Conference on Computational Linguistics. the 28th International Conference on Computational LinguisticsBarcelona, Spain (OnlineInternational Committee on Computational LinguisticsSamuel Louvan and Bernardo Magnini. 2020a. Re- cent neural methods on slot filling and intent clas- sification for task-oriented dialogue systems: A sur- vey. In Proceedings of the 28th International Con- ference on Computational Linguistics, pages 480- 496, Barcelona, Spain (Online). International Com- mittee on Computational Linguistics.
Simple data augmentation for multilingual nlu in task oriented dialogue systems. Samuel Louvan, Bernardo Magnini, Samuel Louvan and Bernardo Magnini. 2020b. Simple data augmentation for multilingual nlu in task ori- ented dialogue systems.
Simple is better! lightweight data augmentation for low resource slot filling and intent classification. Samuel Louvan, Bernardo Magnini, 33rd Pacific Asia Conference on Language, Information and Computation. Samuel Louvan and Bernardo Magnini. 2020c. Simple is better! lightweight data augmentation for low re- source slot filling and intent classification. In 33rd Pacific Asia Conference on Language, Information and Computation.
Example-driven intent prediction with observers. Shikib Mehri, Mihail Eric, 10.18653/v1/2021.naacl-main.237Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesOnline. Association for Computational LinguisticsShikib Mehri and Mihail Eric. 2021. Example-driven intent prediction with observers. In Proceedings of the 2021 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 2979-2992, Online. Association for Computational Linguistics.
Semantic specialisation of distributional word vector spaces using monolingual and cross-lingual constraints. Nikola Mrkšić, Ivan Vulić, Ó Diarmuid, Ira Séaghdha, Roi Leviant, Milica Reichart, Anna Gašić, Steve Korhonen, Young, Transactions of the ACL. 5Nikola Mrkšić, Ivan Vulić, Diarmuid Ó Séaghdha, Ira Leviant, Roi Reichart, Milica Gašić, Anna Korho- nen, and Steve Young. 2017. Semantic specialisa- tion of distributional word vector spaces using mono- lingual and cross-lingual constraints. Transactions of the ACL, 5:309-324.
Data augmentation for spoken language understanding via pretrained models. Baolin Peng, Chenguang Zhu, Michael Zeng, Jianfeng Gao, arXiv:2004.13952arXiv preprintBaolin Peng, Chenguang Zhu, Michael Zeng, and Jian- feng Gao. 2020. Data augmentation for spoken lan- guage understanding via pretrained models. arXiv preprint arXiv:2004.13952.
XCOPA: A multilingual dataset for causal commonsense reasoning. Goran Edoardo Maria Ponti, Olga Glavaš, Qianchu Majewska, Ivan Liu, Anna Vulić, Korhonen, 10.18653/v1/2020.emnlp-main.185Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Online. Association for Computational LinguisticsEdoardo Maria Ponti, Goran Glavaš, Olga Majewska, Qianchu Liu, Ivan Vulić, and Anna Korhonen. 2020. XCOPA: A multilingual dataset for causal common- sense reasoning. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 2362-2376, Online. As- sociation for Computational Linguistics.
Libo Qin, Tianbao Xie, Wanxiang Che, Ting Liu, arXiv:2103.030952021. A survey on spoken language understanding: Recent advances and new frontiers. arXiv preprintLibo Qin, Tianbao Xie, Wanxiang Che, and Ting Liu. 2021. A survey on spoken language understanding: Recent advances and new frontiers. arXiv preprint arXiv:2103.03095.
Language models are unsupervised multitask learners. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, Ilya Sutskever, OpenAI Blog. 81Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. Language models are unsupervised multitask learners. OpenAI Blog, 1(8).
Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, Pranav Khaitan, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence34Abhinav Rastogi, Xiaoxue Zang, Srinivas Sunkara, Raghav Gupta, and Pranav Khaitan. 2020. Towards scalable multi-domain conversational agents: The schema-guided dialogue dataset. In Proceedings of the AAAI Conference on Artificial Intelligence, vol- ume 34, pages 8689-8696.
Crossing the conversational chasm: A primer on multilingual task-oriented dialogue systems. Evgeniia Razumovskaia, Goran Glavaš, Olga Majewska, Anna Korhonen, Ivan Vulić, abs/2104.08570CoRR. Evgeniia Razumovskaia, Goran Glavaš, Olga Majew- ska, Anna Korhonen, and Ivan Vulić. 2021. Cross- ing the conversational chasm: A primer on mul- tilingual task-oriented dialogue systems. CoRR, abs/2104.08570.
Making monolingual sentence embeddings multilingual using knowledge distillation. Nils Reimers, Iryna Gurevych, 10.18653/v1/2020.emnlp-main.365Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Online. Association for Computational LinguisticsNils Reimers and Iryna Gurevych. 2020. Making monolingual sentence embeddings multilingual us- ing knowledge distillation. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 4512-4525, Online. Association for Computational Linguistics.
A primer in BERTology: What we know about how BERT works. Anna Rogers, Olga Kovaleva, Anna Rumshisky, 10.1162/tacl_a_00349Transactions of the Association for Computational Linguistics. 8Anna Rogers, Olga Kovaleva, and Anna Rumshisky. 2020. A primer in BERTology: What we know about how BERT works. Transactions of the Associ- ation for Computational Linguistics, 8:842-866.
XTREME-R: Towards more challenging and nuanced multilingual evaluation. Sebastian Ruder, Noah Constant, Jan Botha, Aditya Siddhant, Orhan Firat, Jinlan Fu, Pengfei Liu, Junjie Hu, Dan Garrette, Graham Neubig, Melvin Johnson, Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing. the 2021 Conference on Empirical Methods in Natural Language ProcessingDominican RepublicAssociation for Computational LinguisticsOnline and Punta CanaSebastian Ruder, Noah Constant, Jan Botha, Aditya Siddhant, Orhan Firat, Jinlan Fu, Pengfei Liu, Jun- jie Hu, Dan Garrette, Graham Neubig, and Melvin Johnson. 2021. XTREME-R: Towards more chal- lenging and nuanced multilingual evaluation. In Pro- ceedings of the 2021 Conference on Empirical Meth- ods in Natural Language Processing, pages 10215- 10245, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Cross-lingual transfer learning for multilingual task oriented dialog. Sebastian Schuster, Sonal Gupta, Rushin Shah, Mike Lewis, 10.18653/v1/N19-1380Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesMinnesotaAssociation for Computational Linguistics1MinneapolisSebastian Schuster, Sonal Gupta, Rushin Shah, and Mike Lewis. 2019. Cross-lingual transfer learning for multilingual task oriented dialog. In Proceed- ings of the 2019 Conference of the North American Chapter of the Association for Computational Lin- guistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 3795-3805, Min- neapolis, Minnesota. Association for Computational Linguistics.
Evaluating the cross-lingual effectiveness of massively multilingual neural machine translation. Aditya Siddhant, Melvin Johnson, Henry Tsai, Naveen Ari, Jason Riesa, Ankur Bapna, Orhan Firat, Karthik Raman, Proceedings of the AAAI Conference on Artificial Intelligence. the AAAI Conference on Artificial Intelligence34Aditya Siddhant, Melvin Johnson, Henry Tsai, Naveen Ari, Jason Riesa, Ankur Bapna, Orhan Firat, and Karthik Raman. 2020. Evaluating the cross-lingual effectiveness of massively multilingual neural ma- chine translation. In Proceedings of the AAAI Con- ference on Artificial Intelligence, volume 34, pages 8854-8861.
How to fine-tune bert for text classification?. Chi Sun, Xipeng Qiu, Yige Xu, Xuanjing Huang, https:/link.springer.com/chapter/10.1007/978-3-030-32381-3_16China National Conference on Chinese Computational Linguistics. SpringerChi Sun, Xipeng Qiu, Yige Xu, and Xuanjing Huang. 2019. How to fine-tune bert for text classification? In China National Conference on Chinese Computa- tional Linguistics, pages 194-206. Springer.
BERT rediscovers the classical NLP pipeline. Ian Tenney, Dipanjan Das, Ellie Pavlick, 10.18653/v1/P19-1452Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsIan Tenney, Dipanjan Das, and Ellie Pavlick. 2019. BERT rediscovers the classical NLP pipeline. In Proceedings of the 57th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 4593- 4601, Florence, Italy. Association for Computational Linguistics.
Spoken language understanding: Systems for extracting semantic information from speech. Gokhan Tur, Renato De Mori, John Wiley & SonsGokhan Tur and Renato De Mori. 2011. Spoken lan- guage understanding: Systems for extracting seman- tic information from speech. John Wiley & Sons.
What is left to be understood in atis?. Gokhan Tur, Dilek Hakkani-Tür, Larry Heck, 2010 IEEE Spoken Language Technology Workshop. IEEEGokhan Tur, Dilek Hakkani-Tür, and Larry Heck. 2010. What is left to be understood in atis? In 2010 IEEE Spoken Language Technology Workshop, pages 19- 24. IEEE.
(almost) zero-shot cross-lingual spoken language understanding. Shyam Upadhyay, Manaal Faruqui, Gökhan Tür, Dilek Hakkani-Tür, Larry P Heck, 10.1109/ICASSP.2018.8461905Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)Shyam Upadhyay, Manaal Faruqui, Gökhan Tür, Dilek Hakkani-Tür, and Larry P. Heck. 2018. (almost) zero-shot cross-lingual spoken language understand- ing. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Pro- cessing (ICASSP), pages 6034-6038.
Siti Oryza Khairunnisa, Mamoru Komachi, and Barbara Plank. 2021. From masked language modeling to translation: Non-english auxiliary tasks improve zero-shot spoken language understanding. Rob Van Der Goot, Ibrahim Sharaf, Aizhan Imankulova, Ahmet Üstün, Marija Stepanović, Alan Ramponi, Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesRob van der Goot, Ibrahim Sharaf, Aizhan Imankulova, Ahmet Üstün, Marija Stepanović, Alan Ramponi, Siti Oryza Khairunnisa, Mamoru Komachi, and Bar- bara Plank. 2021. From masked language modeling to translation: Non-english auxiliary tasks improve zero-shot spoken language understanding. In Pro- ceedings of the 2021 Conference of the North Amer- ican Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 2479-2497.
Attention is all you need. Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, Illia Polosukhin, https:/dl.acm.org/doi/abs/10.5555/3295222.3295349Proceedings of the 31st International Conference on Neural Information Processing Systems. the 31st International Conference on Neural Information Processing SystemsAshish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Sys- tems, pages 6000-6010.
Analyzing multi-head self-attention: Specialized heads do the heavy lifting, the rest can be pruned. Elena Voita, David Talbot, Fedor Moiseev, Rico Sennrich, Ivan Titov, 10.18653/v1/P19-1580Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsElena Voita, David Talbot, Fedor Moiseev, Rico Sen- nrich, and Ivan Titov. 2019. Analyzing multi-head self-attention: Specialized heads do the heavy lift- ing, the rest can be pruned. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 5797-5808, Florence, Italy. Association for Computational Linguistics.
Probing pretrained language models for lexical semantics. Ivan Vulić, Maria Edoardo, Robert Ponti, Goran Litschko, Anna Glavaš, Korhonen, 10.18653/v1/2020.emnlp-main.586Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Ivan Vulić, Edoardo Maria Ponti, Robert Litschko, Goran Glavaš, and Anna Korhonen. 2020. Probing pretrained language models for lexical semantics. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 7222-7240, Online. Association for Computa- tional Linguistics.
A survey of joint intent detection and slot-filling models in natural language understanding. Henry Weld, Xiaoqi Huang, Siqi Long, Josiah Poon, Soyeon Caren Han, abs/2101.08091CoRRHenry Weld, Xiaoqi Huang, Siqi Long, Josiah Poon, and Soyeon Caren Han. 2021. A survey of joint in- tent detection and slot-filling models in natural lan- guage understanding. CoRR, abs/2101.08091.
CCNet: Extracting high quality monolingual datasets from web crawl data. Guillaume Wenzek, Marie-Anne Lachaux, Alexis Conneau, Vishrav Chaudhary, Francisco Guzmán, Armand Joulin, Edouard Grave, Proceedings of the 12th Language Resources and Evaluation Conference. the 12th Language Resources and Evaluation ConferenceMarseille, FranceEuropean Language Resources AssociationGuillaume Wenzek, Marie-Anne Lachaux, Alexis Con- neau, Vishrav Chaudhary, Francisco Guzmán, Ar- mand Joulin, and Edouard Grave. 2020. CCNet: Extracting high quality monolingual datasets from web crawl data. In Proceedings of the 12th Lan- guage Resources and Evaluation Conference, pages 4003-4012, Marseille, France. European Language Resources Association.
Exploiting monolingual data at scale for neural machine translation. Lijun Wu, Yiren Wang, Yingce Xia, Tao Qin, Jianhuang Lai, Tie-Yan Liu, 10.18653/v1/D19-1430Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsLijun Wu, Yiren Wang, Yingce Xia, Tao Qin, Jian- huang Lai, and Tie-Yan Liu. 2019. Exploiting mono- lingual data at scale for neural machine translation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 4207- 4216, Hong Kong, China. Association for Computa- tional Linguistics.
A cross-domain transferable neural coherence model. Peng Xu, Hamidreza Saghir, Jin Sung Kang, Teng Long, Avishek Joey Bose, Yanshuai Cao, Jackie Chi Kit Cheung, 10.18653/v1/P19-1067Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsPeng Xu, Hamidreza Saghir, Jin Sung Kang, Teng Long, Avishek Joey Bose, Yanshuai Cao, and Jackie Chi Kit Cheung. 2019. A cross-domain transfer- able neural coherence model. In Proceedings of the 57th Annual Meeting of the Association for Compu- tational Linguistics, pages 678-687, Florence, Italy. Association for Computational Linguistics.
End-to-end slot alignment and recognition for crosslingual NLU. Weijia Xu, Batool Haider, Saab Mansour, 10.18653/v1/2020.emnlp-main.410Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Online. Association for Computational LinguisticsWeijia Xu, Batool Haider, and Saab Mansour. 2020. End-to-end slot alignment and recognition for cross- lingual NLU. In Proceedings of the 2020 Confer- ence on Empirical Methods in Natural Language Processing (EMNLP), pages 5052-5063, Online. As- sociation for Computational Linguistics.
Joint slot filling and intent detection via capsule neural networks. Chenwei Zhang, Yaliang Li, Nan Du, Wei Fan, Philip Yu, 10.18653/v1/P19-1519Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsChenwei Zhang, Yaliang Li, Nan Du, Wei Fan, and Philip Yu. 2019. Joint slot filling and intent detec- tion via capsule neural networks. In Proceedings of the 57th Annual Meeting of the Association for Com- putational Linguistics, pages 5259-5267, Florence, Italy. Association for Computational Linguistics.
A closer look at few-shot crosslingual transfer: The choice of shots matters. Mengjie Zhao, Yi Zhu, Ehsan Shareghi, Ivan Vulić, Roi Reichart, Anna Korhonen, Hinrich Schütze, 10.18653/v1/2021.acl-long.447Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language ProcessingOnlineAssociation for Computational Linguistics1Mengjie Zhao, Yi Zhu, Ehsan Shareghi, Ivan Vulić, Roi Reichart, Anna Korhonen, and Hinrich Schütze. 2021. A closer look at few-shot crosslingual trans- fer: The choice of shots matters. In Proceedings of the 59th Annual Meeting of the Association for Com- putational Linguistics and the 11th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 5751-5767, Online. Association for Computational Linguistics.
Data augmentation with atomic templates for spoken language understanding. Zijian Zhao, Su Zhu, Kai Yu, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Zijian Zhao, Su Zhu, and Kai Yu. 2019. Data augmen- tation with atomic templates for spoken language understanding. In Proceedings of the 2019 Confer- ence on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP), pages 3628-3634. |
7,403,520 | A Formal Scheme for Multimodal Grammars | We present in this paper a formal approach for the representation of multimodal information. This approach, thanks to the to use of typed feature structures and hypergraphs, generalizes existing ones (typically annotation graphs) in several ways. It first proposes an homogenous representation of different types of information (nodes and relations) coming from different domains (speech, gestures). Second, it makes it possible to specify constraints representing the interaction between the different modalities, in the perspective of developing multimodal grammars. | [
18212263,
17632922,
5349301,
16382303
] | A Formal Scheme for Multimodal Grammars
August 2010
Philippe Blache blache@lpl-aix.fr
LPL-CNRS
Université de Provence
Laurent Prévot
LPL-CNRS
Université de Provence
A Formal Scheme for Multimodal Grammars
Coling 2010: Poster Volume
Beijing, August 2010
We present in this paper a formal approach for the representation of multimodal information. This approach, thanks to the to use of typed feature structures and hypergraphs, generalizes existing ones (typically annotation graphs) in several ways. It first proposes an homogenous representation of different types of information (nodes and relations) coming from different domains (speech, gestures). Second, it makes it possible to specify constraints representing the interaction between the different modalities, in the perspective of developing multimodal grammars.
Introduction
Multimodality has become in the last decade an important challenge for natural language processing. Among the problems we are faced with in this domain, an important one is understanding how the different modalities interact in order to produce meaning. Addressing this question requires collecting data (building corpora), describing them (enriching corpora with annotations) and systematically organizing this information into a homogeneous framework in order to produce, ideally, multimodal grammars.
Many international projects address this question from different perspectives: data representation and coding schemes (cf. ISLE (Dybkjaer, 2001), MUMIN (Allwood, 2005), etc.), corpus annotation (cf. LUNA (Rodriguez, 2007) or DIME (Pineda, 2000), etc.), annotation and editing tools (such as NITE NXT (Carletta, 2003), Anvil (Kipp, 2001), Elan (Wittenburg, 2006), Praat (Boersma, 2009), etc.).
We propose in this paper a generic approach addressing both the formal representation and the concrete annotation of multimodal data, relying on typed feature structures (TFS) used as a description language over graphs. This approach is generic in the sense that it answers different needs: it provides at the same time a formalism directly usable for corpus annotation and a description language making it possible to specify constraints that constitute the core of a multimodal grammar.

In the first section, we motivate the use of TFS and present how to concretely implement them for multimodal annotation. We address in the second section one of the most problematic questions for multimodal studies: how to represent and implement the relations between the different domains and modalities (a simple answer in terms of time alignment being not powerful enough). In the last section, we describe how to make use of this representation in order to specify multimodal grammars.
Typed-feature structures modeling
Information representation is organized along two dimensions: type hierarchies and constituency relations (typically, a prosodic unit is a set of syllables, which in turn are sets of phonemes). The former corresponds to an is-a relation, the latter to a part-of one. For example, an intonational phrase is a subtype of prosodic phrase, and phonemes are constituents of syllables.

Such an organization is directly represented by means of typed feature structures. They can be considered as a formal annotation schema, used as a preliminary step before the definition of the concrete coding scheme (this approach was first defined and experimented with in the XXXX project, not cited for anonymity reasons). This step is necessary when bringing together information (and experts) from different fields: it constitutes a common representation framework, homogenizing information representation. Moreover, it makes it possible to clearly distinguish between knowledge representation and annotation. The coding scheme, at the annotation level (labels, features, values), is deduced from this formal level.

The remainder of the section illustrates how to represent objects from different domains by means of TFS. Figure 1 presents the type hierarchy and the constituency structure of the objects taken here as examples.
Phonetics
The phoneme is used as primary data: this object is at the lowest level of the constituent hierarchy (most of the objects are sets of phonemes). The following feature structure proposes a precise encoding of the main properties describing a phoneme, including articulatory gestures.

phon
  SAMPA_LABEL   sampa_unit
  CAT           vowel, consonant
  TYPE          occlusive, fricative, nasal, etc.
  ARTICULATION
    LIP      PROTUSION  string
             APERTURE   aperture
    TONGUE   TIP   LOCATION  string
                   DEGREE    string
             BODY  LOCATION  string
                   DEGREE    string
    VELUM    aperture
    GLOTTIS  aperture
  ROLE  EPENTHETIC  boolean
        LIAISON     boolean

Being at the lowest level, phonemes do not have any constituents. They are not organized into precise subtypes. The feature structure therefore represents the total information associated with this type.
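As an illustration of how such a typed feature structure could be implemented in an annotation tool, the following hypothetical sketch encodes the phoneme structure with Python dataclasses. The feature names follow the structure above; the value set chosen for the aperture type is invented for the example.

```python
# A hypothetical Python encoding of the phoneme feature structure above.
# Feature names mirror the TFS; the value set for `Aperture` is invented.
from dataclasses import dataclass
from typing import Literal

Aperture = Literal["closed", "mid", "open"]  # illustrative value set

@dataclass
class Articulation:
    lip_protrusion: str
    lip_aperture: Aperture
    tongue_tip_location: str
    tongue_tip_degree: str
    tongue_body_location: str
    tongue_body_degree: str
    velum: Aperture
    glottis: Aperture

@dataclass
class Phoneme:
    sampa_label: str
    cat: Literal["vowel", "consonant"]
    phon_type: Literal["occlusive", "fricative", "nasal"]
    articulation: Articulation
    epenthetic: bool = False   # ROLE | EPENTHETIC
    liaison: bool = False      # ROLE | LIAISON
```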
Prosody
As seen above, prosodic phrases are of two different subtypes: ap (accentual phrases) and ip (intonational phrases). The prosodic type hierarchy is represented as follows:
As seen above, prosodic phrases are of two different subtypes: ap (accentual phrases) and ip (intonational phrases). The prosodic type hierarchy is represented as follows. Accentual phrases have two appropriate features: the label, which is simply the name of the corresponding type, and the list of constituents, in this case a list of syllables. The objects of type ip contain the list of their constituents (a set of aps) as well as the description of their contour. A contour is a prosodic event situated at the end of the ip and usually associated with an ap.

Prosodic phrases are defined as sets of syllables. Syllables are described by several appropriate features: the syllable structure, the position in the word, and the possibility to be accented or prominent:

syl
  STRUCT        syl_struct
  POSITION      RANK        integer
                SYL_NUMBER  integer
  ACCENTUABLE   boolean
  PROMINENCE    boolean
  CONSTITUENTS  list(const_syl)
Syllable constituents (objects of type const_syl) are described by two different features: the set of phonemes (syllable constituents), and the type of the constituent (onset, nucleus and coda). Note that each syllable constituent can contain a set of phonemes.
const_syl
  PHON        list(phon)
  CONST_TYPE  onset, nucleus, coda
Disfluencies
We can distinguish two kinds of disfluencies: non lexicalized (without any lexical material, such as lengthening, silent pauses or filled pauses) and lexicalized (non-voluntary break in the phrasal flow, generating a word or a phrase fragment). Lexicalized disfluencies have a particular organization with three subparts (or constituents):
• Reparandum: the word or phrase fragment, in which the break occurs
• Break: a point or an interval that can eventually be filled by a fragment repetition, parenthetical elements, etc.
Gestures
Besides verbal communication, gestures constitute the main aspect of multimodality. In multimodal annotation, this is probably the most difficult and time-consuming task. Moreover, only a few works really focus on a precise description of all the different domains of verbal and non-verbal modalities. The TFS-based approach proposed here answers the first need in such a perspective: a common representation framework. We give in this section a brief illustration of the representation of one gesture type (hands). It relies on an adaptation of different proposals, especially (Kipp03) and MUMIN (Allwood, 2005), both integrating McNeill's gesture description (McNeill05).

The following structure encodes the description of gesture phases, phrases (representing different semiotic types), the hand shape as well as its orientation, the gesture space, and the possible contact with bodies or objects. A last feature describes the movement itself: trajectory, quality (fast, normal or slow) and amplitude (small, medium and large).

hands_type
  SYMMETRY      boolean
  PHASE         Phase_Type
  PHRASE
    SEMIOTIC TYPE  Semiotic_Type
    EMBLEM         Emblem_Type
    DEICTIC        Deictic_Type
    METAPHORIC     Metaphoric_Type
    PASSIVE_HAND   boolean
    ACTIVE_HAND    boolean
    ICONIC         Iconic_Type
  HANDSHAPE
    SHAPE  HandShape_Type
    LAX    boolean
  GESTURESPACE  Space_Type
  ORIENTATION   Orientation_Type
  CONTACT
    ADAPTOR       Adaptor_Type
    CONTACT PART  Contact_Type
  MOVEMENT
    TRAJECTORY  Trajectory_Type
    AMPLITUDE   Amplitude_Type
    QUALITY     quality_Type
Application
We have experimented with this modeling in the complete annotation of a multimodal corpus (see (Blache, 2010)). In this project, a complete TFS model was first designed, covering all the different domains (prosody, syntax, gestures, discourse, etc.). From this model, the annotations were created, leading to a fully transcribed 3-hour corpus of narrative dialogs. The corpus is fully annotated for some domains (phonetics, prosody and syntax) and partly for others (gestures, discourse, disfluencies, specific phenomena). The result is one of the first large annotated multimodal corpora.
Graphs for Multimodal Annotation
Graphs are frequently used in the representation of complex information, which is the case with multimodality. For linguistic annotation, one of the most popular representations is Annotation Graphs (Bird, 2001). They have been proposed in particular in the perspective of anchoring different kinds of information in the same reference, making it possible to align them. In AGs, nodes represent positions in the signal while edges bear linguistic information. Two edges connecting the same nodes are aligned: they specify different information on the same part of the input. Implicitly, this means that these edges bear different features of the same object. Such a representation constitutes the basis of different approaches aiming at elaborating generic annotation formats, for example LAF (and its extension GrAF (Ide, 2007)). In this proposal, edge labels can be considered as nodes in order to build higher-level information. One can consider the result as a hypergraph, in which nodes can be subgraphs.

We propose in this section a more generalized representation in which nodes are not positions in the signal but directly represent objects (or sets of objects). All nodes have the same structure here, whether they are plain nodes or hypernodes. The main interest of this proposal, on top of providing a homogeneous representation, is the possibility of anchoring information in different references (temporal, spatial or semantic).
Nodes
As seen above, multimodal annotation requires the representation of different kinds of information (speech signal, video input, word strings, images, etc.). The objects that will be used in the description (or the annotation) of the input are of different natures: temporal or spatial, concrete or abstract, visual or acoustic, etc. A generic description first requires a unique way of locating (or indexing) all objects, whatever their domain. In this perspective, an index (in the HPSG sense) can be specified, relying on different information:
• LOCATION: objects can in most of the cases be localized in reference to a temporal or a spatial situation. For example, phonemes have a temporal reference into the speech signal, physical objects have spatial localization that can be absolute (spatial coordinates), or relative (with respect to other objects).
• REALIZATION: data can either refer to concrete or physical objects (phonemes, gestures, referential elements, etc.) as well as abstract ones (concepts, emotions, etc.).
• MEDIUM: specification of the different modalities: acoustic, tactile and visual. 4
• ACCESSIBILITY: some data are directly accessible from the signal or the discourse: they have a physical existence or have already been mentioned. In this case, they are said to be "given" (e.g. gestures, sounds, physical objects). Some other kinds of data are deduced from the context, typically the abstract ones. They are considered as "accessible". A generic node structure can then be given, gathering the index and some other object properties.
node
  ID
  DOMAIN  prosody, syntax, pragmatics, ...
  INDEX
    LOCATION
      TEMPORAL  START  value
                END    value
      SPATIAL   coord
    REALIZATION    concrete, abstract
    MEDIUM         acoustic, tactile, visual
    ACCESSIBILITY  given, accessible
  FEATURES  object_type
The node structure above gathers the different kinds of information just listed. Besides INDEX, some other features complete the description:
• ID: using an absolute ID is useful in the perspective of graph representation, in which nodes can encode any kind of information (atomic or complex, including subgraphs).
• DOMAIN: specification of the domain to which the information belongs. This feature is useful in the specification of generic interaction constraints between domains. The following examples illustrate the representation of atomic nodes from different domains: a phoneme (node n1) and a gesture (node n2), that are temporally anchored, and a physical object (node n3) which is spatially situated. This last object can be used as a referent, for example by a deictic gesture.
ID        n1
DOMAIN    phonetics
INDEX     TEMP [START 285, END 312]
          REALIZATION concrete
          MEDIUM acoustic
          ACCESSIBILITY given
FEATURES  phoneme [LABEL /u/, CAT vowel, ...]

ID        n2
DOMAIN    gesture
INDEX     TEMP [START 200, END 422], ...
FEAT      hand [PHRASE deictic, ORIENTATION front, ...]

ID        n3
DOMAIN    context
INDEX     LOC | SPATIAL <x=242, y=422, z=312>
FEATURES  discourse_referent [SEM book', COLOR red, ...]
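Using the sketch above, the phoneme node n1 and the physical object n3 could, for instance, be instantiated as follows (again purely illustrative):

```python
n1 = Node(
    id="n1", domain="phonetics",
    index=Index(temporal=(285, 312), realization="concrete",
                medium="acoustic", accessibility="given"),
    features={"type": "phoneme", "LABEL": "/u/", "CAT": "vowel"},
)

n3 = Node(
    id="n3", domain="context",
    index=Index(spatial=(242, 422, 312)),
    features={"type": "discourse_referent", "SEM": "book'", "COLOR": "red"},
)
```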
Relations
Linguistic information is usually defined in terms of relations between (sets of) objects, which can be atomic or complex. For example, a phrase is defined by syntactic relations (government, agreement, linearity, etc.) between its constituents. In some cases, these relations can concern objects from the same domain (e.g. syntax in the previous example). In other cases, different domains can be involved. For example, a long break (greater than 200ms) usually precedes a left corner of a new phrase.
The nature of the relation can also differ according to the kind of information to be encoded. Many relations are binary and oriented (precedence, dependency, etc.). Some others only consist in gathering different objects. A construction (in the sense of Construction Grammars, see (Fillmore and Kay, 1996)) is precisely that: a set of objects or properties that, put together, form a specific phenomenon. It is then useful in our representation to distinguish between oriented relations and set relations. Oriented relations (for example precedence) connect a source and a target, each of which can itself be a set of objects. Set relations are used to gather a set of objects, without orientation or order (e.g. the constituency relation).
On top of this distinction, it is also necessary to give an index to relations, so that other objects can refer to them. As for nodes, an index is used, even though its form is simpler and does not need a complex anchor. Finally, for the same reasons as for nodes, the specification of the domain is necessary. The following feature structure gives a first view of this organization:
relation
   INDEX
   DOMAIN     prosody, syntax, pragmatics, ...
   REL_TYPE   ORIENTED_REL [SOURCE index, TARGET index]
              SET_REL      node list
Besides this information, a relation description has to be completed with other features:
• TYPE: different types of relations can be implemented in such representation, such as dependency, precedence, constituency, anaphore, etc.
• SCOPE: a relation can be specific to a construction or, on the contrary, valid whatever the context. For example, the precedence relation [V ≺ Clit [nom] ] is only valid in the context of interrogative constructions, whereas the relation excluding the realization of a backchannel 5 after a connective is valid whatever the context. We then distinguish between local and global scopes.
• POLARITY: a relation can be negated, implementing the impossibility of a relation in a given context.
• CONSTRUCTION: in the case of a local relation, it is necessary to specify the construction to which it belongs.
• STRENGTH: some relations are mandatory, others optional. As for constraints, we then distinguish between hard and soft relations, depending on their status (a possible encoding of these features is sketched below).
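A possible encoding of relations along these lines is given in the following sketch; as before, the names are illustrative and alignment equations are only introduced in the next paragraphs.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Relation:
    index: str                                        # relation identifier, e.g. "r1"
    domain: str                                       # "prosody", "syntax", "pragmatics", ...
    rel_type: str                                     # "oriented" or "set"
    type: str                                         # dependency, precedence, constituency, anaphora, ...
    source: List[str] = field(default_factory=list)   # node ids (oriented relations)
    target: List[str] = field(default_factory=list)   # node ids (oriented relations)
    members: List[str] = field(default_factory=list)  # node ids (set relations)
    scope: str = "global"                              # "global" or "local"
    polarity: str = "plus"                             # "minus" negates the relation
    construction: Optional[str] = None                 # construction name when scope is "local"
    strength: str = "hard"                              # "hard" or "soft"
```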
Finally, a last property has to be precisely defined: the synchronization between two objects coming from different domains (for example gestures and words). In some cases, both objects have to be strictly aligned, with the same boundaries. For example, a syllable has to be strictly aligned with its set of phonemes: the left syllable boundary (resp. the right one) has to be the same as that of its first phoneme (resp. its last one). In other cases, the synchronization need not be strict. For example, a deictic gesture is not necessarily strictly aligned with a referential pronoun. In this case, the boundaries of both objects only have to be roughly in the same part of the signal.
We propose the definition of alignment operators, adapted from (Allen, 1985), as follows:

   =     same     boundaries have to be equal
   <∆    before   b1 <∆ b2 means the b1 value is lower than b2, with b2 − b1 ≤ ∆
   >∆    after    b1 >∆ b2 means that the boundary b1 follows b2, with b1 − b2 ≤ ∆
   ≈∆    almost   boundaries are neighbors, without order relation, with |b1 − b2| ≤ ∆

This set of operators allows the specification of alignment equations between different objects. The advantage of this mechanism is that an equation system can describe complex cases of synchronization. For example, a construction can involve several objects from different domains. Some of these objects can be strictly aligned, some others not.
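The four operators can be read as simple predicates over boundary values; a minimal sketch, assuming numeric time stamps, is given below.

```python
def same(b1: float, b2: float) -> bool:
    """'=' : boundaries have to be equal."""
    return b1 == b2

def before(b1: float, b2: float, delta: float) -> bool:
    """'<Δ' : b1 is lower than b2, with b2 - b1 <= Δ."""
    return b1 < b2 and (b2 - b1) <= delta

def after(b1: float, b2: float, delta: float) -> bool:
    """'>Δ' : b1 follows b2, with b1 - b2 <= Δ."""
    return b1 > b2 and (b1 - b2) <= delta

def almost(b1: float, b2: float, delta: float) -> bool:
    """'≈Δ' : boundaries are neighbors, without order relation, |b1 - b2| <= Δ."""
    return abs(b1 - b2) <= delta
```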
The final TFS representation is as follows:

relation
   INDEX
   DOMAIN        prosody, syntax, pragmatics, ...
   REL_TYPE      ORIENTED_REL [SOURCE index, TARGET index]
                 SET_REL      node list
   TYPE          dependency, precedence, etc.
   SCOPE         global, local
   POLARITY      plus, minus
   CONSTRUCTION  construction_type
   STRENGTH      hard, soft
   ALIGNMENT     alignment_equations
The following feature structure shows an example of a global relation indicating that a verbal nucleus usually comes with a minor raising of the intonation (only the main features are indicated here). This information is represented by an implication relation, which is oriented from the syntactic category to the prosodic phenomenon. Alignment equations stipulate a strict synchronization between objects.
relation
   INDEX
   REL_TYPE | ORIENTED_REL [SOURCE VN_1, TARGET mr_2]
   TYPE       implication
   STRENGTH   soft
   ALIGNMENT  lb1 = lb2 ; rb1 = rb2
Representation with Hypergraphs
Nodes and relations can be combined and form higher level nodes, representing constructions which are a set of objects (the constituents) plus a set of relations between them. Such nodes are in fact hypernodes and bear two kinds of information: the properties characterizing the object plus a set of relations between the constituents (representing a subgraph). In the syntactic domain, for example, they represent phrases, as follows:
DOMAIN    syntax
INDEX | LOCATION | TEMPORAL [START 122, END 584]
FEATURES  CAT VP
RELATIONS
   [ INDEX r1, REL_TYPE | SET_REL {V, NP, Adv}, TYPE constituency, STRENGTH hard ] ;
   [ INDEX r2, REL_TYPE | ORIENTED_REL [SOURCE NP, TARGET V], TYPE dependency, STRENGTH hard ]
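In terms of the earlier sketches, such a hypernode can be seen as a node that additionally carries a list of relations over its constituents; the following fragment (illustrative only) builds the VP example above.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Hypernode(Node):
    relations: List[Relation] = field(default_factory=list)   # subgraph over the constituents

vp = Hypernode(
    id="vp1", domain="syntax",
    index=Index(temporal=(122, 584)),
    features={"CAT": "VP"},
    relations=[
        Relation(index="r1", domain="syntax", rel_type="set", type="constituency",
                 members=["V", "NP", "Adv"], strength="hard"),
        Relation(index="r2", domain="syntax", rel_type="oriented", type="dependency",
                 source=["NP"], target=["V"], strength="hard"),
    ],
)
```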
In the same way, the interaction between different objects from different domains can involve several relations. For example, a deictic construction can be made of the conjunction of an anaphoric pronoun, a deictic gesture and a physical object (for example a book on a shelf). Such a construction can be described by the following structure:
INDEX | LOCATION | TEMPORAL [START 841, END 1520]
FEATURES  SEM book'
RELATIONS
   [ INDEX r3, SET_REL {Pro_1, Dx_gest_2, Ph_object_3}, TYPE constituency, ALIGNMENT lb1 ≈∆ lb2 ; rb1 ≈∆ rb2 ] ;
   [ INDEX r4, ORIENTED_REL [SOURCE Pro_1, TARGET Ph_object_3], TYPE reference ]
This construction indicates some properties (limited here to the semantic value) and two relations between the different objects: a constituency relation, indicating the different objects involved in the construction and their (fuzzy) alignment, and a reference relation between the pronoun and a physical object (here, a book).
This structure represents a hypergraph: it is a graph connecting different nodes, each of them in turn described by another graph, as shown above. The main interest of such a representation is its flexibility: all kinds of information can be described, at any level. Graphs being less constrained than trees, and edges (or relations) being typed, we can gather different levels, different domains and different granularities. For example, an agreement relation can be specified thanks to the deictic construction, besides the constituency one, making it possible to instantiate the agreement value of the pronoun.
Note that hypergraphs are also investigated in other knowledge representation frameworks; their properties are well known (Hayes, 2004), and specific hypergraphs such as the one presented here could be implemented as RDF graphs, for example, as suggested in (Cassidy, 2010).
Constraints for Multimodal Grammars
In the same way as typed feature structures can implement constraints and constitute a description language over linguistic structures (cf. HPSG), the same approach can be generalized to multimodal information. Some recent works have been done in this direction (see (Alahverdzhieva and Lascarides, 2010), among others). The representation we propose can implement generic information about multimodal constructions. We illustrate this aspect in the following with two phenomena: backchannels and dislocation.
Several studies on conversational data (see for example (Bertrand et al., 2009)) have described backchannels (which can be vocal or gestural) and their context. They have in particular underlined some regularities concerning the left context:
• backchannels usually follow: major intonative phrases (IP), flat contours, end of conversational turn (i.e. saturated from a semantic, syntactic and pragmatic point of view)
• backchannels never appear after connectives.

These constraints can be implemented by means of a feature structure (representing a hypernode) with a set of precedence relations. The different objects involved in the description of the phenomenon (IP, flat contour, conversational turn, connective) are indicated with an indexed ID, referring to their complete feature structure, not presented here.
ID        1
DOMAIN    pragmatics
FEATURES  TYPE 2
RELATIONS
   [ INDEX r5, SET_REL {IP_3, FLAT_CONTOUR_4, CONV_TURN_5, CONNECTIVE_6}, TYPE constituency ] ;
   [ INDEX r6, ORIENTED_REL [SOURCE {3, 4, 5}, TARGET 1], TYPE precedence ] ;
   [ INDEX r7, ORIENTED_REL [SOURCE 6, TARGET 1], TYPE precedence, POLARITY minus ] ;
   [ INDEX r8, ORIENTED_REL [SOURCE 3, TARGET vocal_2], TYPE precedence, STRENGTH hard ]
Figure 2: Backchannel Constraint
This structure (cf. Figure 2) represents a constraint that backchannels have to satisfy. The first relation specifies the constituents and their indexes, with which the different precedence constraints are represented. The relation r6 indicates all the kinds of objects that should precede a backchannel. This constraint subsumes the more specific relation r8, stipulating that a vocal backchannel is always preceded by an IP (this is a hard constraint). The relation r7 excludes the possibility for a backchannel to be preceded by a connective.
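As a rough illustration of how such a constraint could be checked mechanically, the sketch below encodes relations r6, r7 and r8 as a simple test over the label of the annotation that immediately precedes a candidate backchannel; it is a simplification of the feature structure above, not an implementation of it, and the label names are assumptions.

```python
def licenses_backchannel(preceding: str, backchannel_medium: str = "vocal") -> bool:
    """Check the backchannel constraint against the immediately preceding
    annotation label, e.g. "IP", "flat_contour", "conv_turn" or "connective"."""
    # r7 (POLARITY minus): a backchannel is never preceded by a connective
    if preceding == "connective":
        return False
    # r8 (hard): a vocal backchannel must be preceded by an IP
    if backchannel_medium == "vocal" and preceding != "IP":
        return False
    # r6 (soft): IPs, flat contours and ends of conversational turns license a backchannel
    return preceding in {"IP", "flat_contour", "conv_turn"}
```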
The second example (cf. Figure 3) proposes a constraint system describing dislocated structures. We propose in this description to distinguish two syntactic constituents that form the two parts of the dislocation: the dislocated phrase (called S1) and the sentence from which the phrase has been extracted (called S2). Usually (even if not always), S2 contains a clitic referring to S1. We note in the following this clitic with the notation S2//Clit. For readability reasons, we only present in this structure the relations.
This structure describes the case of a left dislocation (with S1 preceding S2, the constraint being hard). In such cases, S1 is usually realized with a minor raising contour. The constraint r13 implements the anaphoric relation between the clitic and the dislocated element. Finally, the relation r14 indicates an agreement relation between the clitic and S1 and in particular the fact that the case has to be the same for both objects.
DOMAIN    syntax
RELATIONS
   [ INDEX r11, SET_REL {S1_1, S2_2, MINOR_RAISING_3, S2//CLIT_4}, TYPE constituency ] ;
   [ INDEX r12, ORIENTED_REL [SOURCE 1, TARGET 2], TYPE precedence ] ;
   [ INDEX r13, ORIENTED_REL [SOURCE 1, TARGET 4], TYPE anaphor ] ;
   [ INDEX r14, ORIENTED_REL [SOURCE 1 [CASE 3], TARGET 4 [CASE 3]], TYPE agreement ]
Conclusion
Linguistic annotation in general, and multimodality in particular, requires high-level annotation schemes making it possible to represent in a homogeneous way information coming from the different domains and modalities involved in human communication.
The approach presented in this paper generalizes previous methods (in particular annotation graphs) thanks to two proposals: first, by providing a way to index objects without a strict order relation between nodes and, second, by specifying a precise and homogeneous representation of the objects and their relations. This approach has been developed into a formal scheme, typed feature structures, in which all the different domains can be represented, making it possible to implement hypergraphs directly. TFS and hypergraphs are particularly well adapted to the specification of interaction constraints, describing interaction relations between modalities. Such constraints constitute the core of the definition of future multimodal grammars.
From a practical point of view, the proposal described in this paper is currently under experimentation within the OTIM project (see (Blache, 2010)). An XML schema has been automatically generated from the TFS formal scheme. The existing multimodal annotations, created with ad hoc annotation schemes, are in turn automatically translated into this format. We thus obtain, for the first time, a large annotated multimodal corpus using an XML schema based on a formal specification.
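As an illustration of what such an automatic translation into XML might look like, the fragment below serializes a node from the earlier sketches into a generic XML element; the element and attribute names are invented for the example and do not reproduce the OTIM schema.

```python
import xml.etree.ElementTree as ET

def node_to_xml(node: Node) -> ET.Element:
    """Serialize a Node into a generic XML element (illustrative only)."""
    el = ET.Element("node", id=node.id, domain=node.domain)
    idx = ET.SubElement(el, "index",
                        realization=node.index.realization,
                        medium=node.index.medium,
                        accessibility=node.index.accessibility)
    if node.index.temporal is not None:
        start, end = node.index.temporal
        ET.SubElement(idx, "temporal", start=str(start), end=str(end))
    feats = ET.SubElement(el, "features")
    for name, value in node.features.items():
        ET.SubElement(feats, "feature", name=name, value=str(value))
    return el

print(ET.tostring(node_to_xml(n1), encoding="unicode"))
```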
Figure 1: Type and constituent hierarchies

Figure 3: Dislocation Constraint
Another important interest of AGs is that they can constitute the basis for an exchange format with respect to annotation tool interoperability (a proposal is currently being elaborated under the auspices of the MITRE program, see http://www.mitre.org/).
3 We call object any annotation that participates in the description: phonemes, words, gestures, but also phrases, emotions, etc.
A backchannel is a verbal or gestural reaction of the addressee during a conversation.
Alahverdzhieva, K. and A. Lascarides (2010) "Analysing Language and Co-verbal Gesture and Constraint-based Grammars", in Proceedings of the 17th International Conference on Head-Driven Phrase Structure Grammar.
Allen, F. and P. J. Hayes (1985) "A common-sense theory of time", in 9th International Joint Conference on Artificial Intelligence.
Allwood, J., L. Cerrato, L. Dybkjaer et al. (2005) The MUMIN Multimodal Coding Scheme, NorFA Yearbook 2005.
Bertrand, R., M. Ader, P. Blache, G. Ferré, R. Espesser and S. Rauzy (2009) "Représentation, édition et exploitation de données multimodales : le cas des backchannels du corpus CID", in Cahiers de linguistique française, 33:2.
Blache, P., R. Bertrand and G. Ferré (2009) "Creating and Exploiting Multimodal Annotated Corpora: The ToMA Project", in Kipp, Martin, Paggio and Heylen (eds.) Multimodal Corpora: From Models of Natural Interaction to Systems and Applications, LNAI 5509, Springer.
Blache, P. et al. (2010) "Multimodal Annotation of Conversational Data", in Proceedings of LAW-IV - The Linguistic Annotation Workshop.
Bird, S., D. Day, J. Garofolo, J. Henderson, C. Laprun and M. Liberman (2000) "ATLAS: A Flexible and Extensible Architecture for Linguistic Annotation", in Proceedings of LREC 2000.
Bird, S. and M. Liberman (2001) "A formal framework for linguistic annotation", Speech Communication, Elsevier.
Boersma, P. and D. Weenink (2009) Praat: doing phonetics by computer, http://www.praat.org/
Carletta, J., J. Kilgour and T. O'Donnell (2003) "The NITE Object Model Library for Handling Structured Linguistic Annotation on Multimodal Data Sets", in Proceedings of the EACL Workshop on Language Technology and the Semantic Web.
Carpenter, B. (1992) The Logic of Typed Feature Structures, Cambridge University Press.
Cassidy, S. (2010) "An RDF Realisation of LAF in the DADA Annotation Server", in Proceedings of ISA-5, Hong Kong, January 2010.
Dipper, S., M. Goetze and S. Skopeteas (eds.) (2007) Information Structure in Cross-Linguistic Corpora: Annotation Guidelines for Phonology, Morphology, Syntax, Semantics and Information Structure, Working Papers of the SFB 632, 7:07.
Dybkjaer, L., S. Berman, M. Kipp, M. Wegener Olsen, V. Pirrelli, N. Reithinger and C. Soria (2001) "Survey of Existing Tools, Standards and User Needs for Annotation of Natural Interaction and Multimodal Data", ISLE Natural Interactivity and Multimodality Working Group Deliverable D11.1.
Fillmore, C. and P. Kay (1996) Construction Grammar, Manuscript, University of California at Berkeley, Department of Linguistics.
Gruenstein, A., J. Niekrasz and M. Purver (2008) "Meeting structure annotation: Annotations collected with a general purpose toolkit", in L. Dybkjaer and W. Minker (eds.) Recent Trends in Discourse and Dialogue, Springer-Verlag.
Hayes, J. and C. Gutierrez (2004) "Bipartite graphs as intermediate model for RDF", in Proceedings of ISWC 2004, 3rd International Semantic Web Conference, Japan.
Ide, N. and K. Suderman (2007) "GrAF: A Graph-based Format for Linguistic Annotations", in Proceedings of the Linguistic Annotation Workshop (LAW-07).
Ide, N. and K. Suderman (2009) "Bridging the Gaps: Interoperability for GrAF, GATE, and UIMA", in Proceedings of the Third Linguistic Annotation Workshop, held in conjunction with ACL 2009, Singapore.
Kipp, M. (2001) "Anvil - a generic annotation tool for multimodal dialogue", in Proceedings of the 7th European Conference on Speech Communication and Technology.
Kipp, M. (2003) Gesture Generation by Imitation: From Human Behavior to Computer Character Animation, PhD Thesis, Saarland University.
Lascarides, A. and M. Stone (2009) "A Formal Semantic Analysis of Gesture", in Journal of Semantics, 26(4).
McNeill, D. (2005) Gesture and Thought, The University of Chicago Press.
Pineda, L. and G. Garza (2000) "A Model for Multimodal Reference Resolution", in Computational Linguistics, Vol. 26, no. 2.
Rodriguez, K., K. J. Stefan, S. Dipper, M. Goetze, M. Poesio, G. Riccardi, C. Raymond and J. Wisniewska (2007) "Standoff Coordination for Multi-Tool Annotation in a Dialogue Corpus", in Proceedings of the Linguistic Annotation Workshop at ACL'07 (LAW-07).
Wegener Knudsen, M. et al. (2002) Survey of Multimodal Coding Schemes and Best Practice, ISLE.
Wittenburg, P., H. Brugman, A. Russel, A. Klassmann and H. Sloetjes (2006) "ELAN: a Professional Framework for Multimodality Research", in Proceedings of LREC 2006.
32,615,509 | Construction automatique d'un lexique de modifieurs de polarité | La recherche présentée 1 s'inscrit dans le domaine de la fouille d'opinion, domaine qui consiste principalement à déterminer la polarité d'un texte ou d'une phrase. Dans cette optique, le contexte autour d'un mot polarisé joue un rôle essentiel, car il peut modifier la polarité initiale de ce terme. Nous avons choisi d'approfondir cette question et de détecter précisément ces modifieurs de polarité. Une étude exploratoire, décrite dans des travaux antérieurs, nous a permis d'extraire automatiquement des adverbes qui jouent un rôle sur la polarité des adjectifs auxquels ils sont associés et de préciser leur impact. Nous avons ensuite amélioré le système d'extraction afin de construire automatiquement un lexique de structures lexico-syntaxiques modifiantes associées au type d'impact qu'elles ont sur un terme polarisé. Nous présentons ici le fonctionnement du système actuel ainsi que l'évaluation du lexique obtenu.ABSTRACTAutomatic Construction of a Contextual Valence Shifters LexiconThe research presented in this paper takes place in the field of Opinion Mining, which is mainly devoted to assigning a positive or negative label to a text or a sentence. The context of a highly polarized word plays an essential role, as it can modify its original polarity. The present work addresses this issue and focuses on the detection of polarity shifters. In a previous study, we have automatically extracted adverbs impacting the polarity of the adjectives they are associated to and qualified their influence. The extraction system has then been improved to automatically build a lexicon of contextual valence shifters. This lexicon contains lexico-syntactic patterns combined with the type of influence they have on the valence of the polarized item. The purpose of this paper is to show how the current system works and to present the evaluation of the created lexicon. MOTS-CLÉS : fouille d'opinion, modifieurs de valence affective, modifieurs de polarité. | [
141434080,
194052360,
3181362
] | Construction automatique d'un lexique de modifieurs de polarité
2012. 2012. 2012. 27 juin -1 er juillet 2011
Actes De La Conférence Conjointe
Jep-Taln-Recital
Construction automatique d'un lexique de modifieurs de polarité
ATALA & AFCP TALN 2011
Grenoble, 4 au 8 juin; Montpellier32012. 2012. 2012. 27 juin -1 er juillet 2011opinion mining, contextual valence shifters
La recherche présentée 1 s'inscrit dans le domaine de la fouille d'opinion, domaine qui consiste principalement à déterminer la polarité d'un texte ou d'une phrase. Dans cette optique, le contexte autour d'un mot polarisé joue un rôle essentiel, car il peut modifier la polarité initiale de ce terme. Nous avons choisi d'approfondir cette question et de détecter précisément ces modifieurs de polarité. Une étude exploratoire, décrite dans des travaux antérieurs, nous a permis d'extraire automatiquement des adverbes qui jouent un rôle sur la polarité des adjectifs auxquels ils sont associés et de préciser leur impact. Nous avons ensuite amélioré le système d'extraction afin de construire automatiquement un lexique de structures lexico-syntaxiques modifiantes associées au type d'impact qu'elles ont sur un terme polarisé. Nous présentons ici le fonctionnement du système actuel ainsi que l'évaluation du lexique obtenu.ABSTRACTAutomatic Construction of a Contextual Valence Shifters LexiconThe research presented in this paper takes place in the field of Opinion Mining, which is mainly devoted to assigning a positive or negative label to a text or a sentence. The context of a highly polarized word plays an essential role, as it can modify its original polarity. The present work addresses this issue and focuses on the detection of polarity shifters. In a previous study, we have automatically extracted adverbs impacting the polarity of the adjectives they are associated to and qualified their influence. The extraction system has then been improved to automatically build a lexicon of contextual valence shifters. This lexicon contains lexico-syntactic patterns combined with the type of influence they have on the valence of the polarized item. The purpose of this paper is to show how the current system works and to present the evaluation of the created lexicon. MOTS-CLÉS : fouille d'opinion, modifieurs de valence affective, modifieurs de polarité.
Introduction et état de l'art
Le champ de recherche de la fouille d'opinion regroupe des tâches diverses, notamment celle de distinguer le positif du négatif et de définir de cette façon la polarité (ou valence) d'un texte. D'un point de vue terminologique, nous préférerons le terme polarité au terme anglais valence afin d'éviter l'ambiguité avec le concept français de valence en syntaxe. Ces dernières années, les recherches dans ce domaine se sont fortement développées, comme on peut le voir dans la vue d'ensemble donnée par (Pang et Lee, 2008). Les tâches à accomplir se sont diversifiées et spécialisées, selon les contraintes industrielles ou le niveau de précision voulu. Il est progressivement apparu qu'une des entraves importantes à l'efficacité des systèmes de fouille d'opinion était la prise en compte du contexte. En effet, la présence d'un terme négatif dans une phrase, par exemple, ne signifie pas forcément que la phrase est négative. Ce terme peut effectivement être nié, tempéré, intégré dans un contexte hypothétique, etc. Ainsi, de nombreux phénomènes contextuels ont un impact sur un terme polarisé dans un texte. Notre objectif ici est d'identifier et de décrire précisément ces phénomènes.
Etat de l'art
Dans le domaine de la fouille d'opinion, rares sont les travaux dont le sujet central d'étude traite des phénomènes qui ont un impact sur la polarité d'un terme. Zaenen et Polanyi (2004) tentent, dans cette optique, de décrire tous les cas où le contexte modifie un terme polarisé. Leur étude, en anglais, postule l'existence d'éléments contextuels appelés contextual valence shifters qui modifient la valeur initiale d'un terme. Leur hypothèse de travail est que la valence de termes polarisés peut être renforcée ou affaiblie par la présence d'autres items lexicaux, par la structure du discours et le type de texte, ou enfin par des facteurs culturels.
Sur leur impulsion, des travaux de plus en plus nombreux, introduisent dans des systèmes de classification d'opinion, une certaine prise en compte du contexte, plus ou moins complète et riche (Kennedy et Inkpen, 2006;Musat et Trausan-Matu, 2010;Taboada et al., 2011). Les ressources utilisées (comme des listes d'adverbes) sont généralement définies intuitivement. La terminologie anglophone qui traite de ces concepts relativement récents est assez diverse et peu stabilisée. Plusieurs notions sont utilisées, certaines se complètent ou se recouvrent. Ainsi, Zaenen et Polanyi (2004) traitent des concepts de contextual valence shifter, ou modifier et plus précisément de la negation (qui inverse la polarité) et des intensifiers (qui l'intensifient ou l'atténuent). Kennedy et Inkpen (2006) précisent ensuite cette terminologie et divisent les modifieurs en trois types : la negation, les intensifiers (qui ont la seule faculté d'intensification) et les diminishers (qui atténuent la force d'un terme polarisé). Signalons également le concept d'intensifiers défini par (Quirk et al., 1985) comme des éléments qui ont un impact sur l'intensité de la polarité d'un terme. Ils se classent en deux grandes catégories : les amplifiers qui amplifient l'intensité sémantique du voisinnage lexical (very), et les downtoners, qui atténuent cette intensité (slightly). De plus, au-delà de l'idée d'intensité, sont développés également les concepts de polarity influencers (Wilson et al., 2009), non veridical context (Zwarts, 1995;Giannakidou, 1998) ou irrealis markers (Taboada et al., 2011).
Cette terminologie n'est pas, à notre connaissance, développée dans les recherches francophones. Vernier et al. (2009) et Petrakis et al. (2009 prennent en compte, dans certains cas, le contexte autour de termes polarisés ou la combinaison de plusieurs termes polarisés, mais ne reprennent pas le concept de contextual valence shifter tel qu'il est défini plus haut. Nous parlerons ici de modifieur de polarité (aussi appelés modifieur de valence affective), et des notions d'intensifieurs, atténuateurs, et inverseurs, à la suite des travaux de (Zaenen et Polanyi, 2004) et (Kennedy et Inkpen, 2006).
Objectifs
Sur la base des constatations ci-dessus, notre objectif est d'extraire automatiquement, à partir d'un corpus, des phénomènes contextuels qui ont un impact sur des termes polarisés, autrement dit construire de façon automatique une liste de modifieurs à partir d'un corpus. Nous nous limitons ici à l'étude de toutes structures (ou patrons) lexico-syntaxiques dans lesquelles sont intégrés des termes polarisés et qui ont un impact sur la polarité d'un terme. Il s'agira par exemple du syntagme prépositionnel qui associe la préposition sans à un nom, patron lexico-syntaxique qui inverse la polarité du nom. Nos travaux antérieurs ont conduit à la création d'une méthodologie, qui repère des structures de ce type susceptibles d'être des modifieurs de polarité. Cette méthodologie, appliquée à l'extraction d'adverbes (plus précisément de syntagmes adjectivaux modifiés par des adverbes), est décrite dans (Boubel et Bestgen, 2011). Les résultats de cette étude exploratoire permettent de supposer que certaines caractéristiques statistiques peuvent prédire des caractéristiques sémantiques (et prédire donc en particulier l'impact sémantique du modifieur sur un terme polarisé). Une analyse linguistique des adverbes extraits, présentée dans (Boubel, 2011) a ensuite été menée afin de vérifier cette hypothèse. Cette analyse a mis en évidence trois types d'adverbes partageant des caractéristiques statistiques communes. Il est apparu que les adverbes de chaque catégorie remplissent également un rôle sémantique similaire. Trois cas ont ainsi été dégagés 2 : -le modifieur intensifie le terme polarisé auquel il est associé (« (. . . ) le film est absolument jubilatoire. ») ; -le modifieur inverse ou atténue la polarité du terme (« C'est absurde, peu crédible, inintéressant (...). ») ; -le modifieur apparait dans une structure évaluative plus large, comme une comparaison ou une concession, et met souvent en relation plusieurs termes polarisés (« On l'eût aimé moins glacé, plus fiévreux, plus emporté. ») ; ce dernier cas se distingue des précédents car le modifieur n'a pas un impact direct sur un terme précis.
Notre objectif ici est d'améliorer et de perfectionner la méthodologie d'extraction sur deux points :
1. Dépasser le cadre des adverbes : notre première étude a démontré la pertinence des adverbes comme modifieurs potentiels ; nous cherchons maintenant à déterminer dans quelle mesure d'autres catégories syntaxiques peuvent également être modifiantes. Pour cela, nous avons adapté le système à la détection de toutes relations de dépendance syntaxique éventuellement modifiantes.
2. Automatiser le classement des modifieurs : la méthodologie d'extraction ne définit pas, à l'origine, la nature du modifieur (son impact sur le terme polarisé). Nous avons automatisé, dans notre système actuel, le classement des modifieurs dans une des trois catégories définies plus haut.
L'outil fournit donc maintenant en sortie une liste de structures lexico-syntaxiques modifiantes classées en trois groupes. L'objectif principal de l'article est d'évaluer la pertinence des résultats obtenus, et la performance du système. L'évaluation manuelle et systématique effectuée nous permet également de juger de la pertinence de notre catégorisation de modifieurs et de mettre en lumière d'autres phénomènes contextuels intéressants.
Dans la suite de cet article, nous décrivons l'approche adoptée et la méthodologie proposée, avant d'évaluer les résultats et de conclure.
Méthodologie proposée
La méthodologie d'extraction a été développée en collaboration avec Yves Bestgen et est décrite en détail dans (Boubel et Bestgen, 2011). Nous rappelons ici les grandes lignes de l'approche. Nous expliquons ensuite plus en détail l'automatisation du classement des modifieurs grâce à l'ajout de règles basées sur les résultats statistiques.
Approche
Nous nous basons sur deux ressources : un corpus contenant des énoncés dont on connaît la polarité, et un lexique de termes positifs ou négatifs. L'idée de départ de notre approche est de s'intéresser au contexte linguistique des termes issus du lexique. En effet, on peut supposer que l'impact du contexte sera différent selon qu'il porte sur un terme positif ou négatif : (1) dans un texte négatif, (2) dans un texte positif ou (3) dans un texte présentant une opinion mitigée. Nous nous limitons à l'étude de structures lexico-syntaxiques, et étudions en conséquence les relations de dépendance syntaxique mettant en jeu un terme polarisé. L'objectif est de rendre compte, grâce à des techniques statistiques, de l'influence du contexte sur la polarité d'un terme et de dégager les contextes lexico-syntaxiques qui induisent toujours le même impact.
Ressources et outils utilisés
Le corpus utilisé est constitué d'extraits de critiques de films issus du site Allociné 3 . Ce site rassemble, pour un même film, de brefs extraits (pouvant aller d'un syntagme à quelques phrases) d'articles provenant de différents journaux donnant une opinion sur le film. Ces extraits sont classés selon la teneure de l'opinion sur une échelle de 1 à 5 : les avis très négatifs ont la note de 1, et les avis très positifs la note de 5. Le corpus contient 77561 critiques et environ 2 millions de mots.
Le lexique que nous avons utilisé (lexique classant des termes selon leur polarité) a été constitué automatiquement grâce à la méthode de (Vincze et Bestgen, 2011).
Règles d'attribution automatique d'étiquettes
Afin de finaliser la procédure d'extraction, nous cherchons à définir automatiquement l'impact des structures lexico-syntaxiques significatives extraites sur un terme polarisé. Dans (Boubel, 2011), notre analyse linguistique nous avait amenée à analyser plus en détail les résultats statistiques. Nous avions ainsi mis en relation les surreprésentations et sousreprésentations caractéristiques de ces adverbes avec leur rôle sémantique. Globalement, trois tendances se sont dégagées (la surreprésentation s'est avérée plus informative que la sousreprésentation) :
1. Les adverbes surreprésentés avec des adjectifs dont la polarité coïncide avec celle de la critique, comme pour l'adverbe profondément (table 3), intensifient souvent la valeur des adjectifs auxquels ils sont associés : « (...) Un film gonflé et profondément attachant (...) ».
2. Les adverbes surreprésentés avec un adjectif dont la polarité ne coïncide pas avec celle de la critique, comme on peut le voir dans la Il est raisonnable de penser que ces conclusions peuvent être valables pour d'autres structures que les adverbes. C'est donc sur la base de l'étude empirique présentée ci-dessus que nous avons défini un ensemble de règles qui attribuent un score à une structure pour chaque type de modifieurs. Nous conférons ainsi un score de 1 à 10 à une structure pour chacune des trois classes de modifieurs, grâce à une dizaine de règles par classe. Ces règles, que nous ne détaillons pas ici pour une question de clarté et de place, se basent sur les propriétés statistiques du modifieur (surreprésentations et sous-représentations) en fonction de deux critères : la polarité du terme et la note de la critique. Ainsi, une structure obtient un score élevé : (1) dans la classe des intensifieurs lorsqu'elle est surreprésentée dans une critique dont la polarité coïncide avec celle du terme polarisé ;
(2) dans la classe des inverseurs lorsqu'elle est surreprésentée dans une critique dont la polarité n'est pas celle du terme polarisé ; (3) dans la classe des concessifs lorsqu'elle est surreprésentée dans les critiques mitigées (note 3/5). Nous classons ensuite la structure dans la catégorie de modifieurs qui obtient le score le plus élevé. De cette façon, le syntagme nominal modifié par l'adjectif total (cf table 2) obtient un score de 8 comme intensifieur, 0 comme inverseur et 2 commme concessif. Cette structure est en effet surreprésentée dans les critiques positives lorsqu'elle est associée à un nom positif, et surreprésentée dans les critiques négatives lorsqu'elle est associée à un nom négatif, ce qui lui confère un score d'intensification élevé. Le syntagme nominal modifié par l'adjectif total est donc répertorié comme intensifieur avec un score de 8. La liste des 37 intensifieurs, dont des exemples sont reportés dans la table 7, est principalement constituée, à plus de 90%, de structures lexico-syntaxiques contenant des adjectifs ou des adverbes. On remarquera toutefois la présence de certaines formes plus complexes, comme le complément du nom "de l'année", qui amplifie clairement la polarité du nom auquel il est associé. Enfin, les 27 concessifs sont en majorité des structures contenant des adverbes, mais d'autres structures plus diverses apparaissent, de la même façon que pour les inverseurs. Là encore, les adjectifs sont peu nombreux (au nombre de 3 : certain, inégal, même). Au vu de la liste à juger, nous sommes amenée à reconsidérer quelque peu la définition de cette catégorie. On y trouve en effet des éléments très divers, mais qui expriment tous, et grâce à des stratégies plus ou moins directes, un avis mitigé, une opinion nuancée, ou une hésitation, comme on peut l'entrevoir dans les exemples de la table 9. Ainsi, les constructions plus complexes (vb -sans déplaisir, finir par -vb) font référence à des stratégies qui expriment un avis mitigé ou peu enthousiaste. 3. Adjectifs : Les adjectifs sont souvent utilisés comme les indices principaux d'une polarité. L'extraction de nombreux adjectifs ici montre qu'ils peuvent aussi être des modifieurs, en particulier pour l'intensification. Pour résumer, plus de 50% des modifieurs jugés pertinents correspondent à des structures contenant un adverbe (associé à un adjectif ou à un verbe polarisé). Environ 22% des modifieurs ne sont pas des adjectifs ou des adverbes (noms, prepositions, déterminants...). Ceux-ci sont principalement des inverseurs et des concessifs. Il semble donc que des stratégies plus diverses soient utilisées pour exprimer la concession et l'inversion que pour l'intensification. L'intensification agit en effet souvent de façon plus directe sur un terme polarisé grâce à une relation syntaxique locale. Au contraire, l'apport d'une nuance, quelle qu'elle soit, ou d'une inversion, s'exprimera plutôt au niveau de l'organisation du discours, ou grâce à des formulations plus complexes. Il serait intéressant de se pencher plus particulièrement sur ces phénomènes.
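Le principe de classement décrit ci-dessus (un score de 1 à 10 par classe, la catégorie retenue étant celle du score maximal) peut être esquissé comme suit ; il s'agit d'une illustration simplifiée, les règles exactes de calcul des scores n'étant pas détaillées dans l'article.

```python
def classer_modifieur(scores):
    """Retourne la catégorie de modifieur dont le score est maximal.
    scores : dictionnaire {classe: score sur 10} produit par les règles."""
    categorie = max(scores, key=scores.get)
    return categorie, scores[categorie]

# Exemple repris de l'article : le syntagme nominal modifié par l'adjectif "total"
scores_total = {"intensifieur": 8, "inverseur": 0, "concessif": 2}
print(classer_modifieur(scores_total))  # ('intensifieur', 8)
```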
Évaluation
Résultats de l'extraction
Évaluation des résultats
Erreurs ou incohérences de l'extraction
Lors de notre évaluation manuelle, un certain nombre de résultats se sont révélés être réellement inappropriés et ne pas avoir leur place dans le contexte d'une évaluation ou d'une opinion. Tout d'abord, 17 éléments proviennent d'erreurs d'extraction ou d'erreurs induites par la méthodologie (erreurs de l'analyse syntaxique ou dans le lexique). Les scores de ces éléments sont généralement peu élevés. D'autre part, une cinquantaine de structures se sont avérées plus problématiques, dans la mesure où il est difficile de déterminer en quoi elles ont un impact sur la polarité d'un terme. Certaines font référence à des constructions très courantes, que l'analyse statistique a jugées significatives. C'est le cas du syntagme nominal introduit par le déterminant indéfini un, classé comme intensifieur avec un score de 9. Ces cas obtiennnent parfois des scores élevés. D'autres structures apparaissent dans des contextes divers. Il est alors difficile de leur définir un rôle sémantique clair. Enfin, nous avons considéré que 3 structures extraites effectivement modifiantes, reprises dans la table 10, ont été mal classées. Deux d'entre elles ont un score très faible.
Structure | Score | Type du modifieur
OBJ(<VERB : :>,<PRON :rien :>) | 4 | Concessif
NMOD_POSIT1(<NOUN : :>,<ADJ :bon :>) | 1 | Inverseur
NMOD_POSIT1(<NOUN : :>,<ADJ :efficace :>) | 1 | Inverseur
Performance de l'extraction
Au terme de cette étude, une première remarque doit être faite sur le nombre d'extractions. Il s'avère en effet moins important que l'on aurait pu le supposer au départ, et peu de structures obtiennent un score élevé (30 éléments avec un score de 5 ou plus, et 16 avec 7 ou plus). Cela s'explique en partie par le fait que les relations de dépendance que l'on extrait, relativement précises, ont chacunes des fréquences peu élevées dans le corpus, ce qui complique l'analyse statistique. En ce qui concerne la performance proprement dite du système, 87 structures ont donc été considérées comme bien classées sur les 243 structures proposées, soit environ 35%. Il nous faut mesurer ce résultat par le fait qu'aucun seuil minimal de score n'a été appliqué sur la liste complète. Les structures qui obtiennent un score élevé sont cependant relativement pertinentes. Ainsi sur les 30 structures ayant un score de 5 ou plus, 23 ont été jugées pertinentes (et 14 structures pertinentes sur 16 pour un score égal à 7 ou plus). D'autre part, la méthodologie n'extrait finalement qu'environ 30% de réelles incohérences. Certaines extractions sont en effet pertinentes dans le langage de l'évaluation, comme nous l'avons montré précédemment. Enfin, le système a tendance à déterminer de façon correcte le type du modifieur lorsque celui-ci est pertinent. Trois éléments seulement se révèlent mal classés. Ces résultats sont récapitulés dans la table 13.
Conclusion et perspectives
Le travail présenté ici identifie des modifieurs de polarité grâce à l'étude de structures lexicosyntaxiques qui mettent en jeu un terme polarisé. Cette étude se révèle être un bon point de départ pour se rendre compte concrètement de divers phénomènes de modification. Elle met en avant en particulier, de par la méthodologie utilisée, les éléments qui ont un impact direct et local sur un terme d'une certaine polarité. Il s'avère que ces éléments sont relativement limités. D'autres stratégies, plus diverses et complexes, apparaissent. Ces stratégies expriment souvent une atténuation ou une inversion, se situent plutôt au niveau de l'organisation du discours et associent fréquemment plusieurs termes polarisés.
D'une part, il sera nécessaire de compléter cette analyse qualitative par une analyse quantitative en intégrant les modifieurs extraits ici dans un système de fouille d'opinion. L'objectif est de savoir si la prise en compte de ces structures modifiantes améliore la détection de la polarité d'un syntagme ou d'une phrase. D'autre part, cette étude a montré l'intérêt d'approfondir la recherche sur les stratégies de
(...) total enchantement.
NMOD_POSIT1(<NOUN : :>,<ADJ :véritable :>) 5 (...) un véritable nanar sans intérêt (...)
NMOD_POSIT1(<NOUN : :>,<ADJ :tel :>) 5 Dommage que les personnages, les gags et le scénario (...) dégagent un tel ennui.
ADJMOD(<ADJ : :>,<ADV :profondément :>) 4 Un film profondément généreux.
NMOD_POSIT1(<NOUN : :>,<NOUN :année :>) 2 (...) le ratage le plus spectaculaire et inattendu de l'année.
(...) et le ton (...) ne fonctionnent pas.
PREPOBJ(<NOUN : :>,<PREP :sans :>) 8 C'est un ouvrage sans grâce.
ADJMOD(<ADJ : :>,<ADV :jamais :>) 5 Ici, c'est épuisant et jamais crédible.
VMOD(<VERB : :>,<CONJQUE :que :>) 5 un exercice de style où la vie ne palpite que trop rarement.
PREPOBJ(<NOUN : :>,<PREP :en dépit de :>) 3 Hélas, en dépit de la générosité du propos, (....) arrache cependant quelques sourires désabusés.
PREPOBJ(<VERB : :>,<PREP :par :>) 7,5 (...) finit par séduire ; (...) finit par lasser
CONNECT(<VERB : :>,<CONJ :si :>) 7,5 Mais, avouons-le, si le mélo fonctionne c'est surtout grâce à (...)
NMOD_POSIT1(<NOUN : :>,<ADJ :certain :>) 7 (...) une certaine froideur habite son exercice de style virtuose.
NMOD_POSIT1(<NOUN : :>,<ADJ :inégal :>) 4 (...) avec un bonheur inégal.
PREPOBJ(<NOUN : :>,<PREP :malgré :>) 4 Bref, malgré la sincérité du réalisateur, Bella Ciao est un film raté.
VMOD_POSIT1(<VERB : :>,<NOUN :déplaisir :>) 3,5 (...) cette comédie (...) se déguste sans déplaisir.
TABLE 1 - Les différentes caractéristiques d'un syntagme extrait

Afin de juger de l'impact des syntagmes extraits sur la polarité, nous retirons le terme polarisé et recherchons au sein du corpus la structure lexico-syntaxique obtenue. Cette méthode nous a permis de déterminer la fréquence d'apparition de chacune de ces structures dans le corpus selon deux critères : (1) la note de la critique, (2) la polarité du terme polarisé. Nous analysons ces données d'un point de vue statistique en construisant une table de contingence pour chaque structure selon ces deux critères (nous ne retenons que les structures ayant une fréquence de plus de 20 quand elles sont associées à au moins une des deux polarités). Nous analysons chaque table au moyen du test du chi-carré, et évaluons de cette manière s'il y a indépendance entre la note de la critique et la fréquence de la relation. Nous retenons ensuite les relations pour lesquelles le résultat du test est significatif (seuil de 0,05) et en calculons les résidus ajustés. De cette façon, nous mettons en évidence les structures lexico-syntaxiques dont la note de la critique et la polarité du terme associé ont un effet sur leur distribution dans le corpus. Nous nous appuyons enfin sur la valeur des résidus ajustés significatifs (résidus dont la probabilité est inférieure à 0,05) pour parler d'une relation surreprésentée (résidu ajusté positif) ou sous-représentée (résidu ajusté négatif) dans une note particulière avec un terme d'une certaine polarité.

La table 2 montre les résultats statistiques obtenus pour le syntagme nominal modifié par l'adjectif total. Lorsqu'elle est associée à un nom positif, cette structure obtient un chi-carré de 16,99, correspondant à une p-value de 0,002, valeur inférieure au seuil de 0,05. Nous considérons donc le test significatif, et calculons les résidus ajustés. Les valeurs des résidus ajustés significatifs indiquent que la structure, lorsqu'elle est associée à un nom positif, est sous-représentée dans les notes 2 et 3, et surreprésentée dans les notes 4 et 5. La même analyse est effectuée lorsque total est associé à un nom négatif. Ainsi, le syntagme s'avère notamment surreprésenté dans les critiques positives avec un nom positif (« (...) un film d'une finesse totale ») et surreprésenté dans les critiques négatives avec un nom négatif (« (...) d'une bêtise abyssale, d'une abjection totale. »).
TABLE 2 - Caractéristiques statistiques de l'adjectif total
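À titre d'illustration, voici une esquisse en Python du calcul du chi-carré et des résidus ajustés décrits ci-dessus pour une telle table de contingence ; les effectifs de l'exemple sont hypothétiques (ils ne proviennent pas du corpus) et cette esquisse ne correspond pas nécessairement à l'implémentation originale.

```python
import numpy as np
from scipy.stats import chi2_contingency

def residus_ajustes(table):
    """Chi-carré et résidus ajustés d'une table de contingence
    (lignes : polarité du terme ; colonnes : notes de la critique 1/5 à 5/5)."""
    table = np.asarray(table, dtype=float)
    chi2, p, dof, attendu = chi2_contingency(table)
    total = table.sum()
    lignes = table.sum(axis=1, keepdims=True)
    colonnes = table.sum(axis=0, keepdims=True)
    # résidu ajusté = (observé - attendu) / sqrt(attendu * (1 - p_ligne) * (1 - p_colonne))
    denom = np.sqrt(attendu * (1 - lignes / total) * (1 - colonnes / total))
    return chi2, p, (table - attendu) / denom

# Effectifs hypothétiques d'une structure, selon la polarité du terme et la note
chi2, p, residus = residus_ajustes([[12, 8, 15, 40, 35],
                                    [30, 25, 20, 10, 8]])
```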
table 4, remplissent souvent un rôle d'atténuateur ou d'inverseur : « Ici, c'est épuisant et jamais crédible. ».
3. Les adverbes surreprésentés dans les notes mitigées, comme c'est le cas pour l'adverbe parfois (table 5), sont souvent inclus dans des structures rhétoriques plus larges comme des mécanismes d'opposition, de concession ou de comparaison : « La poésie trash du réalisateur ne faiblit pas, contrairement au rythme de son récit parfois répétitif. ».
Sur ces constatations, nous avons donc dégagé trois catégories d'adverbes ayant un rôle séman-
tique différent. Pour plus de facilité pour la suite de notre étude, nous abrégeons ces différentes
catégories par intensifieurs (point 1 ci-dessus), inverseurs (point 2) et concessifs (point 3), bien
que ces termes soient réducteurs.
ADJMOD(<ADJ : :>,<ADV :profondément :>)
Avec mots positifs
1/5 2/5 3/5 4/5 5/5
surreprésenté
•
•
non-significatif
•
sous-représenté
•
•
TABLE 3 - Surreprésentations et sous-représentations de l'adverbe profondément

ADJMOD(<ADJ : :>,<ADV :jamais :>)
Avec mots négatifs
1/5 2/5 3/5 4/5 5/5
surreprésenté
•
•
non-significatif
sous-représenté
•
•
•
TABLE 4 - Surreprésentations et sous-représentations de l'adverbe jamais

ADJMOD(<ADJ : :>,<ADV :parfois :>)
Avec mots positifs
1/5 2/5 3/5 4/5 5/5
surreprésenté
•
non-significatif
•
•
•
sous-représenté
•
Avec mots négatifs
1/5 2/5 3/5 4/5 5/5
surreprésenté
•
non-significatif
•
•
sous-représenté
•
•
TABLE 5 - Surreprésentations et sous-représentations de l'adverbe parfois
TABLE 6 - Liste des 10 premières structures extraites obtenant le score le plus élevé.
Pour évaluer la pertinence des résultats, nous avons parcouru manuellement la liste des 243 structures extraites afin de déterminer dans quelle mesure elles modifient effectivement la polarité d'un terme et de quelle façon. Il s'agit donc d'une évaluation qualitative des résultats obtenus.Bien entendu, il sera indispensable d'effectuer une analyse quantitative et d'évaluer l'apport de
cette liste de modifieurs au sein d'une application de fouille d'opinion. Ces deux évaluations n'ont
pas le même objectif et se complètent, c'est pourquoi il est intéressant de les mener à bien toutes
les deux.
De nombreux retours sur corpus ont été nécessaires pour comprendre le rôle des structures ex-
traites dans une critique. Certaines structures font référence à des constructions lexico-syntaxiques
plus larges, que l'analyseur syntaxique ne peut pas restituer dans son intégralité. C'est le cas, par
exemple, de la relation PREPOBJ d'un verbe modifié par la préposition par, qui fait référence à la
construction plus large : [il finit par -VB]). Dans ce cas, nous avons jugé et classé les structures
ou expressions dans leur intégralité. Nous les classons donc comme correctes lorsqu'elles sont
effectivement modifiantes.
3.2.1 Cas pertinents
Notre évaluation manuelle nous a conduit à juger 87 structures pertinentes sur 243, dont 37
intensifieurs, 22 inverseurs et 27 concessifs. Le dernier modifieur est la structure adjectivale
modifiée par l'adverbe peu classée autant comme un inverseur que comme un concessif (avec
un score de 4 pour les deux catégories). Cette double fonction inverseur-concessif n'est pas
incohérente dans l'absolu. Pour l'adverbe peu, en particulier, il est parfaitement concevable de lui
attribuer un rôle d'atténuateur (« Pour addicts peu exigeants. »), mais aussi de le voir intégrer une
structure comparative ou concessive plus large (« Aussi peu réaliste que morale »). En revanche, le
double classement intensifieur-inverseur est plus problématique en soi. Aucune des 8 structures
dans cette situation (comme la préposition à travers ou le complément du nom homme) n'a
été retenue. Les structures mises en évidence ici sont variées. Elles sont souvent composées
d'adverbes et d'adjectifs, catégories syntaxiques largement utilisées dans d'autres travaux du
domaine. Toutefois d'autres structures rarement traitées apparaissent également. Nous détaillons
ici les résultats pour chaque catégorie de modifieurs.
TABLE 7 - Exemple d'intensifieurs
Ensuite, les 22 inverseurs ont des formes plus hétérogènes, les catégories syntaxiques qui les
composent sont plus diverses. Il s'agit également en majorité d'adverbes, mais peu d'adjectifs
sont présents. Les stratégies utilisées pour exprimer l'inversion ou l'atténuation semblent en effet
plus variées. Les structures qui ont obtenu les scores les plus hauts sont clairement des inverseurs,
comme on peut le voir dans la table 8. Notons que la structure VMOD(<VERB : :>,<CON-
JQUE :que :>) correspond à la construction syntaxique restrictive "ne..que".
TABLE 9 - Exemple de concessifs

Après ce tour d'horizon, il est intéressant de se pencher sur la nature des éléments pertinents.
Précédemment, nous avions limité notre étude aux adverbes et montré leur pertinence en tant
que modifieurs de polarité. Au vu des résultats obtenus ici pour les autres catégories syntaxiques,
ils semblent occuper un rôle central. Signalons tout de même que, dans la mesure où la catégorie
des adverbes est relativement fermée, ils vont être plus fréquents et donc plus facilement déclarés
significatifs par le test. Dans cet article, nous avons cherché à évaluer la pertinence des autres
catégories syntaxiques. Les résultats sont moins nombreux, mais ils viennent compléter notre
lexique et apportent donc une information non-négligeable.
1. Verbes : Très peu de verbes sont extraits et aucun n'est pertinent. Certaines structures
verbales sont extraites cependant par l'intermédiaire d'autres relations de dépendance (des
relations qui mettent en jeu des prépositions, par exemple). C'est le cas de l'expression [il
finit par -VB]. On pourrait envisager d'autres expressions pertinentes comme [être loin
de -VB] ou [passer à côté de -NOM]. Il serait intéressant d'étudier à quel point les verbes
peuvent effectivement avoir un impact sur un terme polarisé.
TABLE 10 -
3.2.3 Other contextual phenomena
Besides the correct cases and the genuine inconsistencies, our system also brings to light elements that play a certain role in the expression of a polarity or an evaluation. They are not modifiers, but they shed interesting light on the phenomena that appear in the context of polarized terms. Among these particular contextual phenomena, 19 elements actually belong to the vocabulary of cinema and designate characteristics of the film that are subject to judgment, and that are therefore, in this setting, often associated with positive or negative terms (Table 11).
TABLE 11 - Constructions containing terms from the vocabulary of cinema, e.g. (<NOUN : :>,<NOUN :effet :>) « surenchère d'effets » ; NMOD_POSIT1(<NOUN : :>,<NOUN :dialogue :>) « pauvreté des dialogues », « musicalité des dialogues »
We then classified 30 elements as being themselves polarized. Since our system is based on lexico-syntactic structures, some polarized elements turn out to be complex structures. This is the case, for example, of the noun complement "de maître", clearly positive, as can be seen in the expressions "main de maître" or "travail de maître". Classical polarity lexicons often take little account of this type of multi-word expression, concentrating on single words. It would be advantageous to also use these polarized expressions in an automatic opinion-mining task.
Finally, we also extract a number of diverse structures (36) that play a role in the expression of an evaluation. They cannot be considered modifiers, in the sense that they have no real impact on a polarized term, but they occupy an important place in evaluative language. These are not larger fixed structures that the parser was unable to capture as a whole, but rather frequently used expressions or formulations that accept many variants. Here, they often reflect formulations used to judge a film. Examples are given in Table 12. These structures can play different roles: some are rather positive or negative in the strict context of cinema, while others seem to serve mainly to introduce a judgment, without being polarized themselves.
TABLE 12 - Diverse evaluative constructions, e.g. (<VERB : :>,<PREP :à :>) « c'est à voir », « c'est à découvrir » ; OBJ(<VERB : :>,<PRON :ça :>) « (...) n'a pas mérité ça », « on a vu ça 100 fois », « on n'a jamais vu ça » ; NMOD_POSIT1(<NOUN : :>,<NOUN :série :>) « comédie de série b », « série z », « série télé » ; OBJ(<VERB : :>,<NOUN :intérêt :>) « n'offre aucun intérêt », « éveille l'intérêt », « capte l'intérêt »
3 misclassified structures
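As a rough illustration of how such lexico-syntactic constructions can be exploited, the sketch below tallies, for each extracted dependency pattern, how often it co-occurs with positive or negative terms from a polarity lexicon. It is only a sketch: the compact pattern strings, the toy lexicon, and the function name are assumptions for illustration, not the extraction procedure used in this work.

```python
from collections import Counter, defaultdict

# Toy polarity lexicon (assumption; a real lexicon is far larger).
POLARITY = {"pauvreté": "neg", "surenchère": "neg", "musicalité": "pos", "maître": "pos"}

def tally_patterns(occurrences):
    """occurrences: (pattern, polarized_term) pairs observed in dependency-parsed reviews,
    e.g. ('NMOD_POSIT1(<NOUN::>,<NOUN:dialogue:>)', 'pauvreté').
    Returns, for each construction, how often it co-occurs with positive,
    negative, or unknown terms."""
    counts = defaultdict(Counter)
    for pattern, term in occurrences:
        counts[pattern][POLARITY.get(term, "unknown")] += 1
    return counts

if __name__ == "__main__":
    obs = [("NMOD_POSIT1(<NOUN::>,<NOUN:dialogue:>)", "pauvreté"),
           ("NMOD_POSIT1(<NOUN::>,<NOUN:dialogue:>)", "musicalité"),
           ("NMOD_POSIT1(<NOUN::>,<NOUN:effet:>)", "surenchère")]
    for pattern, dist in tally_patterns(obs).items():
        print(pattern, dict(dist))
```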
TABLE 13 - Distribution of the manually evaluated modifiers
In conclusion, our methodology has allowed us to identify relevant modifiers beyond the scope of the adverb and to extract varied elements that are typical of the language of evaluation. It brings out phenomena that are little treated in other work, and in more detail, insofar as we deal with lexico-syntactic structures rather than single terms. The results highlight two somewhat different behaviours: intensification, carried by adjectives and adverbs, often with a direct impact on a polarized term, and the expression of inversion, attenuation or nuance, carried by more diverse and complex constructions.
This categorization may be improved or modified according to our further investigations in the domain, and in particular according to the results obtained in this study.
2. Nouns: Few nouns turn out to be relevant modifiers, although many nouns were extracted; this is the category that is most frequently a source of error. They do seem to have a substantial impact, but this impact is difficult to justify and to judge.
without limiting ourselves to syntactic dependency relations. Two directions, in particular, would be interesting to explore: (1) an in-depth study of polarity inversion phenomena, and (2) the definition of composition rules (in order to determine the polarity of a sentence that contains several polarized terms).
References
AÏT-MOKHTAR, S., CHANOD, J. et ROUX, C. (2002). Robustness beyond shallowness : Incremental deep parsing. Natural Language Engineering, 8(2-3):121-144.
BOUBEL, N. (2011). Extraction automatique de modifieurs de valence affective dans un texte. Etude exploratoire appliquée au cas de l'adverbe. In Travaux du Cercle belge de Linguistique, volume 6.
BOUBEL, N. et BESTGEN, Y. (2011). Une procédure pour identifier les modifieurs de la valence affective d'un mot dans des textes. In Actes de TALN11, volume 2, pages 137-142, Montpellier.
GIANNAKIDOU, A. (1998). Polarity sensitivity as (non) veridical dependency, volume 23. J. Benjamins.
KENNEDY, A. et INKPEN, D. (2006). Sentiment classification of movie reviews using contextual valence shifters. Computational Intelligence, 22(2):110-125.
MUSAT, C. et TRAUSAN-MATU, S. (2010). The impact of valence shifters on mining implicit economic opinions. Artificial Intelligence : Methodology, Systems, and Applications, pages 131-140.
PANG, B. et LEE, L. (2008). Opinion mining and sentiment analysis. Foundations and Trends in Information Retrieval, 2(1-2).
PETRAKIS, S., KLENNER, M., AILLOUD, E. et FAHRNI, A. (2009). Composition multilingue de sentiments. In Actes de TALN2009, Senlis.
QUIRK, R., GREENBAUM, S., LEECH, G., SVARTVIK, J. et CRYSTAL, D. (1985). A comprehensive grammar of the English language, volume 397. Cambridge Univ Press.
TABOADA, M., BROOKE, J., TOFILOSKI, M., VOLL, K. et STEDE, M. (2011). Lexicon-based methods for sentiment analysis. Computational Linguistics, 37(2):267-307.
VERNIER, M., MONCEAUX, L., DAILLE, B. et DUBREIL, E. (2009). Catégorisation des évaluations dans un corpus de blogs multi-domaine. http://hal.archives-ouvertes.fr/hal-00405407/fr/.
VINCZE, N. et BESTGEN, Y. (2011). Identification de mots germes pour la construction d'un lexique de valence au moyen d'une procédure supervisée. In Actes de TALN11, volume 1, pages 223-234, Montpellier.
WILSON, T., WIEBE, J. et HOFFMANN, P. (2009). Recognizing contextual polarity : An exploration of features for phrase-level sentiment analysis. Computational Linguistics, 35(3):399-433.
ZAENEN, A. et POLANYI, L. (2004). Contextual valence shifters. In Proceedings of AAAI Spring Symposium on Exploring Attitude and Affect in Text, pages 106-111.
ZWARTS, F. (1995). Nonveridical contexts. Linguistic Analysis, 25:286-312. |
9,507,963 | Using Parsed Corpora for Structural Disambiguation in the TRAINS Domain | This paper describes a prototype disambiguation module, KANKEI, which was tested on two corpora of the TRAINS project. In ambiguous verb phrases of form V ... NP PP or V ... NP adverb(s), the two corpora have very different PP and adverb attachment patterns; in the first, the correct attachment is to the VP 88.7% of the time, while in the second, the correct attachment is to the NP 73.5% of the time. KANKEI uses various n-gram patterns of the phrase heads around these ambiguities, and assigns parse trees (with these ambiguities) a score based on a linear combination of the frequencies with which these patterns appear with NP and VP attachments in the TRAINS corpora.Unlike previous statistical disambiguation systems, this technique thus combines evidence from bigrams, trigrams, and the 4-gram around an ambiguous attachment. In the current experiments, equal weights are used for simplicity but results are still good on the TRAINS corpora (92.2% and 92.4% accuracy). Despite the large statistical differences in attachment preferences in the two corpora, training on the first corpus and testing on the second gives an accuracy of 90.9%. | [
4683457,
62536391,
543
] | Using Parsed Corpora for Structural Disambiguation in the TRAINS Domain
Mark Core mcore@cs.rochester.edu
Department of Computer Science, University of Rochester, Rochester, New York 14627
Using Parsed Corpora for Structural Disambiguation in the TRAINS Domain
This paper describes a prototype disambiguation module, KANKEI, which was tested on two corpora of the TRAINS project. In ambiguous verb phrases of form V ... NP PP or V ... NP adverb(s), the two corpora have very different PP and adverb attachment patterns; in the first, the correct attachment is to the VP 88.7% of the time, while in the second, the correct attachment is to the NP 73.5% of the time. KANKEI uses various n-gram patterns of the phrase heads around these ambiguities, and assigns parse trees (with these ambiguities) a score based on a linear combination of the frequencies with which these patterns appear with NP and VP attachments in the TRAINS corpora. Unlike previous statistical disambiguation systems, this technique thus combines evidence from bigrams, trigrams, and the 4-gram around an ambiguous attachment. In the current experiments, equal weights are used for simplicity but results are still good on the TRAINS corpora (92.2% and 92.4% accuracy). Despite the large statistical differences in attachment preferences in the two corpora, training on the first corpus and testing on the second gives an accuracy of 90.9%.
Introduction
The goal of the TRAINS project is to build a computerized planning assistant that can interact conversationally with its user. The current version of this planning assistant, TRAINS 95, is described in (Allen et al., 1995); it passes speech input onto a parser whose chart is used by the dialog manager and other higher-level reasoning components. The planning problems handled involve moving several trains from given starting locations to specified destinations on a map display (showing a network of rail lines in the eastern United States). The 95 dialogs are a corpus of people's utterances to the TRAINS 95 system; they contain 773 instances of PP or adverb postmodifiers that can attach to either NPs or VPs. Many of these cases were unambiguous, as there was no NP following the VP, or the NP did not follow a verb. Only 275 utterances contained ambiguous constructions and in 73.5% of these, the correct PP/adverb attachment was to the NP. One goal of the TRAINS project is to enhance the TRAINS 95 system sufficiently to handle the more complex TRAINS 91-93 dialogs. This corpus was created between 1991 and 1993 from discussions between humans on transportation problems involving trains. The dialogs deal with time constraints and the added complexity of using engines to pick up boxcars and commodities to accomplish delivery goals. This corpus contains 3201 instances of PP or adverb postmodifiers that can attach to either NPs or VPs. 1573 of these examples contained both an NP and a VP to which the postmodifier could attach. The postmodifier attached to the VP in 88.7% of these examples. On average, a postmodifier attachment ambiguity appears in the 91-93 dialogs after about 54 words, which is more frequent than the 74 word average of the 95 dialogs. This suggests that a disambiguation module is going to become necessary for the TRAINS system. This is especially true since some of the methods used by TRAINS 95 to recover from parse errors will not work in a more complex domain. For instance in the 95 dialogs, a PP of form at city-name can be assumed to give the current location of an engine that is to be moved. However, this is not true of the 91-93 dialogs where actions such as load often take at city-name as adjuncts.
2 Methodology
KANKEI (from the Japanese word kankei, meaning "relation") is a first attempt at a TRAINS disambiguation module. Like the systems in (Hindle and Rooth, 1993) and (Collins and Brooks, 1995), KANKEI records attachment statistics on information extracted from a corpus. This information consists of phrase head patterns around the possible locations of PP/adverb attachments. Figure 1 shows how the format of these patterns allows for combinations including a verb, NP-head (the rightmost NP before the postmodifier), and either the preposition and head noun in the PP, or one or more adverbs (examples of trailing adverb pairs are first off and right now). These patterns are similar to the ones used by the disambiguation systems of (Collins and Brooks, 1995) and (Brill and Resnik, 1994), except that Brill and Resnik form rules from these patterns, while KANKEI and the system of Collins and Brooks use the attachment statistics of multiple patterns. While KANKEI combines the statistics of multiple patterns to make a disambiguation decision, Collins and Brooks' model is a backed-off model that uses 4-gram statistics where possible, falls back to 3-gram statistics when no 4-gram statistics are available, and uses bigram statistics otherwise.
verb NP-head (preposition obj-head | adverb1 adverb2)
Figure 1: Format of an attachment pattern
Most items in this specification are optional. The only requirement is that patterns have at least two items: a preposition or adverb, and a verb or NP-head. The singular forms of nouns and the base forms of verbs are used. These patterns (with hyphens separating the items) form keys to two hash tables; one records attachments to NPs while the other records attachments to VPs. Numbers are stored under these keys to record how often such a pattern was seen in a (not necessarily ambiguous) VP or NP attachment. Sentence 1 instantiates the longest possible pattern, a 4-gram that here consists of need, orange, in, and Elmira.
1) I need the oranges in Elmira.
The TRAINS corpora are much too small for KANKEI to rely only on the full pattern of phrase heads around an ambiguous attachment.
While searching for attachment statistics for sentence 1, KANKEI will check its hash tables for the key need-orange-in-Elmira. If it relied entirely on full patterns, then whenever the pattern had not been seen, KANKEI would have to guess the attachment randomly. Such a technique will be referred to as full matching. Normally KANKEI will do partial matching, i.e., if it cannot find a pattern such as need-orange-in-Elmira, it will look for smaller partial patterns, which here would be: need-in, orange-in, orange-in-Elmira, need-in-Elmira, and need-orange-in. The frequency with which NP and VP attachment occurs for these patterns is totaled to see if one attachment is preferred. Currently, we count partial patterns equally, but in future refinements we would like to choose weights more judiciously. For instance, we would expect shorter patterns such as need-in to carry less weight than longer ones. The need to choose weights is a drawback of the approach. However, the intuition is that one source of evidence is insufficient for proper disambiguation. Future work needs to further test this hypothesis.
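To make the matching step concrete, the sketch below generates the full and partial patterns for a head tuple such as (need, orange, in, Elmira) and totals NP- versus VP-attachment counts over them. It is only an illustration: the dictionary-based counts, the function names, the lower-cased keys, and the tie-break toward the VP default are assumptions rather than KANKEI's actual implementation, while the uniform weights mirror the equal weighting used in the current experiments.

```python
from itertools import combinations

def partial_patterns(verb, np_head, prep, obj_head=None):
    """The full pattern plus all partial patterns that keep the preposition/adverb
    and at least one of the verb and the NP-head (cf. the need-orange-in-Elmira example)."""
    postmods = [[prep], [prep, obj_head]] if obj_head else [[prep]]
    patterns = set()
    for r in (1, 2):
        for heads in combinations([verb, np_head], r):
            for tail in postmods:
                patterns.add("-".join(list(heads) + tail))
    return patterns

def choose_attachment(np_counts, vp_counts, verb, np_head, prep, obj_head=None, weights=None):
    """Sum (optionally weighted) NP vs. VP attachment frequencies over all patterns."""
    np_total = vp_total = 0.0
    for p in partial_patterns(verb, np_head, prep, obj_head):
        w = (weights or {}).get(p, 1.0)          # equal weights, as in the current experiments
        np_total += w * np_counts.get(p, 0)
        vp_total += w * vp_counts.get(p, 0)
    if np_total == vp_total:
        return "VP"                               # hypothetical tie-break toward a default guess
    return "NP" if np_total > vp_total else "VP"

# Hypothetical counts standing in for the two hash tables trained on the TRAINS corpora.
np_counts = {"orange-in": 3, "orange-in-elmira": 1}
vp_counts = {"need-in": 2}
print(choose_attachment(np_counts, vp_counts, "need", "orange", "in", "elmira"))  # -> NP
```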
The statistics used by KANKEI for partial or full matching can be obtained in various ways. One is to use the same kinds of full and partial pattern matching in training as are used in disambiguation. This is called comprehensive training. Another method, called raw training, is to record only full patterns for ambiguous and unambiguous attachments in the corpus. (Note that full patterns can be as small as bigrams, such as when an adverb follows an NP acting as a subject.) Although raw training only collects a subset of the data collected by comprehensive training, it still gives KANKEI some flexibility when disambiguating phrases. If the full pattern of an ambiguity has not been seen, KANKEI can test whether a partial pattern of this ambiguous attachment occurred as an unambiguous attachment in the training corpus. Like the disambiguation system of (Brill and Resnik, 1994), KANKEI can also use word classes for some of the words appearing in its patterns. The rudimentary set of noun word classes used in this project is composed of city and commodity classes and a train class including cars and engines.
3 Measure of Success
One hope of this project is to make generalizations across corpora of different domains. Thus, experiments included trials where the 91-93 dialogs were used to predict the 95 dialogs and vice versa. Experiments on the effect of training and testing KANKEI on the same set of dialogs used cross validation; several trials were run with a different part of the corpus being held out each time. In all these cases, the use of partial patterns and word classes was varied in an attempt to determine their effect. Notice that all of these results involve either word classes or partial patterns. There is a difference of at least 30 attachments (1.9% accuracy) between the best results in these tables and the results that did not use word classes or partial patterns. Thus, it appears that at least one of these methods of generalization is needed for this high-dimensional space. The 93 dialogs predicted attachments in the 95 test data with a success rate of 90.9%, which suggests that KANKEI is capable of making generalizations that are independent of the corpus from which they were drawn. The overall accuracy is high: the 95 data was able to predict itself with an accuracy of 92.2%, while the 93 data predicted itself with an accuracy of 92.4%.
Discussion and Future Work
The results for the TRAINS corpora are encouraging. We would also like to explore how KANKEI performs in a more general domain such as the Wall Street Journal corpus from the Penn Treebank. We could then compare results with Collins and Brooks' disambiguation system, which was also tested using the Penn Treebank's Wall Street Journal corpus. Weighting the n-grams in a nonuniform manner should improve accuracy on the TRAINS corpora as well as in more general domains. (Alshawi and Carter, 1994) address a related problem, weighting scores from different disambiguation systems to obtain a single rating for a parse tree. They achieved good results using a hill climbing technique to explore the space of possible weights. Another possible technique for combining evidence is the maximum-entropy technique of (Wu, 1993). We are also considering using logical forms (instead of words and word classes) in collocation patterns.
The integration of KANKEI with the TRAINS parser needs to be completed. As a first attempt, when the TRAINS parser tries to extend the arcs associated with the rules VP -> VP (PP|ADV) and NP -> NP (PP|ADV), KANKEI will adjust the probabilities of these arcs based on attachment statistics. 4 Ultimately, the TRAINS disambiguation module will contain functions measuring rule habituation and distance effects. Then it will become necessary to weight the scores of each disambiguation technique according to its effectiveness. The ability to adjust probabilities based on evidence seen is an advantage over rule-based approaches. This advantage is obtained at the expense of storing all the patterns seen.
Table 2: Results of training and testing on the 95 dialogs
Table 3: Results of training and testing on the 93 dialogs
(Columns in both tables: Word Classes, Raw Training, P. Matching, Default Guess, % by Default, % Accuracy.)
The rows labeled % by Default give the portion of the total success rate (last row) accounted for by KANKEI's default guess. The results of training on the 95 data and testing on the 93 data are not shown because the best results were no better than always attaching to the VP.
Acknowledgments
This work was supported in part by National Science Foundation grant IRI-95033312. Len Schubert's supervision and many helpful suggestions are gratefully acknowledged. Thanks also to James Allen for his helpful comments.
James Allen, George Ferguson, Bradford Miller, and Eric Ringger. 1995. Spoken dialogue and interactive planning. In Proc. of the ARPA Spoken Language Technology Workshop, Austin, TX.
Hiyan Alshawi and David Carter. 1994. Training and scaling preference functions. Computational Linguistics, 20(4):635-648.
Eric Brill and Philip Resnik. 1994. A rule-based approach to prepositional phrase attachment disambiguation. In Proc. of the 15th International Conference on Computational Linguistics, Kyoto, Japan.
Michael Collins and James Brooks. 1995. Prepositional phrase attachment through a backed-off model. In Proc. of the 3rd Workshop on Very Large Corpora, pages 27-38, Boston, MA.
Donald Hindle and Mats Rooth. 1993. Structural ambiguity and lexical relations. Computational Linguistics, 19(1):103-120.
Dekai Wu. 1993. Estimating probability distributions over hypotheses with variable unification. In Proc. of the 11th National Conference on Artificial Intelligence, pages 790-795, Washington D.C.
4 The TRAINS parser is probabilistic although the probabilities are parse scores, not formal probabilities. |
17,934,558 | The More the Better? Assessing the Influence of Wikipedia's Growth on Semantic Relatedness Measures | Wikipedia has been used as a knowledge source in many areas of natural language processing. As most studies only use a certain Wikipedia snapshot, the influence of Wikipedia's massive growth on the results is largely unknown. For the first time, we perform an in-depth analysis of this influence using semantic relatedness as an example application that tests a wide range of Wikipedia's properties. We find that the growth of Wikipedia has almost no effect on the correlation of semantic relatedness measures with human judgments, while the coverage steadily increases. | [
15754496,
405,
10089399,
6896607,
6181951
] | The More the Better? Assessing the Influence of Wikipedia's Growth on Semantic Relatedness Measures
Torsten Zesch
Ubiquitous Knowledge Processing Lab, Computer Science Department, Technische Universität Darmstadt, Hochschulstraße 10, D-64289 Darmstadt, Germany
Iryna Gurevych
Ubiquitous Knowledge Processing Lab, Computer Science Department, Technische Universität Darmstadt, Hochschulstraße 10, D-64289 Darmstadt, Germany
The More the Better? Assessing the Influence of Wikipedia's Growth on Semantic Relatedness Measures
Wikipedia has been used as a knowledge source in many areas of natural language processing. As most studies only use a certain Wikipedia snapshot, the influence of Wikipedia's massive growth on the results is largely unknown. For the first time, we perform an in-depth analysis of this influence using semantic relatedness as an example application that tests a wide range of Wikipedia's properties. We find that the growth of Wikipedia has almost no effect on the correlation of semantic relatedness measures with human judgments, while the coverage steadily increases.
Introduction
Wikipedia is a multilingual, web-based, freely available encyclopedia, constructed in a collaborative effort of voluntary contributors. Wikipedia has arguably become the largest collection of freely available knowledge with currently approx. 9.25 million articles in more than 250 languages. This knowledge has been used in many areas of natural language processing (see (Medelyan et al., 2009) for an overview) to overcome the knowledge acquisition and coverage problems pertinent to conventional knowledge sources like WordNet (Fellbaum, 1998). However, most studies only performed their evaluation using a single Wikipedia snapshot available at the time of the experiments. Thus, it is largely unknown whether the results would have been different for other snapshots. The difference between snapshots might be significant, as all Wikipedia language editions grow very quickly. For example in 2008, the German Wikipedia has grown by over 150,000 articles (i.e. over 400 articles per day). 1 Figure 1 visualizes this development. In this paper, we investigate the influence of Wikipedia's growth on the performance of natural language processing tasks that use Wikipedia as a knowledge source.
In contrast to an unstructured corpus that just grows in size, Wikipedia is a structured resource that grows in different ways: (i) new articles, categories, or redirects are added, (ii) existing articles, categories, or redirects are corrected or augmented, or (iii) the links between articles or categories are changed. In our analysis, we need to ensure that all these aspects of Wikipedia's growth are tested. Thus, we selected the pervasive task of computing semantic relatedness for our studies, as the different approaches to computing semantic relatedness make use of different properties of Wikipedia like the article text, the article links, the category system, or the article titles and redirects. Additionally, computing semantic relatedness directly uses Wikipedia as a knowledge source, while more complex tasks would entail other influences. Furthermore, Wikipedia is increasingly used as a knowledge source for computing semantic relatedness (Gabrilovich and Markovitch, 2007; Nakayama et al., 2007; Ponzetto and Strube, 2007; Milne and Witten, 2008; Zesch et al., 2008b). Therefore, the impact of Wikipedia's growth on the performance of semantic relatedness is an important field of study on its own. We use two approaches for the evaluation of semantic relatedness measures: (i) correlation with human judgments and (ii) solving word choice problems. The correlation with human judgments depends more directly on the performance of the semantic relatedness measures, while solving word choice problems is better suited to assess the coverage of a resource, as the available word choice datasets are significantly larger and contain more complex vocabulary than the word pair datasets used for measuring the correlation with human judgments.
1 http://stats.wikimedia.org
The paper is structured as follows: We first describe related work in Section 2. We give a short overview of semantic relatedness measures in Section 3. We then explain our experimental setup in Section 4. We present the results in Section 5. Finally, we summarize our findings in Section 6.
Related Work
To our knowledge, there is no other study performing an in-depth analysis of the influence of Wikipedia's growth on the performance of tasks using it as a knowledge source. Ponzetto and Strube (2007) replicated their semantic relatedness experiments, which were originally performed on an English snapshot from February 2006, on two more recent snapshots from September 2006 and May 2007. They found that the choice of the snapshot had no significant influence on the performance, but they did not report the influence on coverage. Furthermore, the number of snapshots used does not allow for general conclusions. Buriol et al. (2006) create a graph representation of Wikipedia, and analyze the temporal evolution of the topological properties of this graph representation. They did not assess the consequences of the topological changes on NLP applications.
The growth of a resource is also an issue for corpusbased NLP approaches, where the size of the available corpus increases due to the growing Web and improved data processing capabilities. For corpus-based approaches, usually "more data are better data" (Church and Mercer, 1993), e.g. the quality of statistical machine translation continues to improve with increasing corpus size (Brants et al., 2007). However, these findings cannot be generalized to semantically structured corpora like Wikipedia.
Semantic Relatedness Measures
A multitude of semantic relatedness measures relying on structured knowledge sources have been proposed that can be categorized into: (i) path based, (ii) gloss based, (iii) concept vector based, and (iv) link vector based measures (Zesch and Gurevych, 2010).
Path based measures (Budanitsky and Hirst, 2006) rely on paths in a graph of concepts built from a knowledge source. Zesch et al. (2007) describe how state-of-the-art path based measures can be adapted to Wikipedia.
Gloss based measures (Lesk, 1986; Mihalcea and Moldovan, 1999; Banerjee and Pedersen, 2002) rely on word overlaps between definitions of concepts. Gloss based measures can be directly applied to Wikipedia, as each Wikipedia article represents a definition of a concept.
Concept Vector based measures (Patwardhan and Pedersen, 2006; Gabrilovich and Markovitch, 2007) use the textual description of a concept to construct a vector representation. The semantic relatedness between two words can then be computed as the cosine of their corresponding concept vectors.
Link Vector based measures (Nakayama et al., 2007; Milne and Witten, 2008) use the links between concepts to construct a vector representation. The semantic relatedness between two words can then be computed as the cosine of their corresponding link vectors.
Each measure type uses different properties of Wikipedia, and thus tests different kinds of growth in Wikipedia. For example, the path based measures are only affected by changes to the assignment of articles to categories and when new articles or categories are added. They are insensitive to changes made to the textual content, while these changes have a major influence on the gloss based and the concept vector based measures. However, as the concept vector based measures draw knowledge from different articles in parallel, we expect them to be more robust to changes in Wikipedia than the gloss based measures. The link vector based measures are only influenced by changes to the article links or when new articles are added. For our analysis, we select the most prototypical measure from each measure type. Note that this is not necessarily the measure that yields the best results, as we are only interested in the changes of performance over time, not in the absolute scores.
From the path based measures, we select the simple path length measure by Rada et al. (1989). It uses the path length l between two nodes representing concepts in the knowledge source to compute semantic relatedness. We select this measure, as it is the most versatile path based measure imposing the least constraints on a resource. The measure (abbreviated as Path) is computed as follows:
$\mathit{dist}_{Path}(c_1, c_2) = l(c_1, c_2)$
where $c_1$ and $c_2$ are concepts, and $l(c_1, c_2)$ returns the number of edges on the path from $c_1$ to $c_2$. The distance value can be easily transformed into a relatedness value by subtracting it from the maximum path length $l_{max}$ of the graph:
$\mathit{rel}_{Path}(c_1, c_2) = l_{max} - l(c_1, c_2)$
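A minimal sketch of this measure, assuming the Wikipedia category/article graph is available as an undirected adjacency dictionary (building that graph, e.g. with JWPL, is not shown):

```python
from collections import deque

def path_length(graph, c1, c2):
    """Number of edges on the shortest path between concepts c1 and c2 (breadth-first search)."""
    if c1 == c2:
        return 0
    seen, queue = {c1}, deque([(c1, 0)])
    while queue:
        node, dist = queue.popleft()
        for neighbour in graph.get(node, ()):
            if neighbour == c2:
                return dist + 1
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, dist + 1))
    return None  # concepts are not connected

def rel_path(graph, c1, c2, l_max):
    """Turn the distance into a relatedness value by subtracting it from the maximum path length."""
    l = path_length(graph, c1, c2)
    return None if l is None else l_max - l
```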
From the gloss based measures, we select the simple gloss overlap measure by Lesk (1986) abbreviated as Gloss. It is based on the amount of word overlap in the glosses of two concepts, where higher overlap means that two terms are more related.
$\mathit{rel}_{Gloss}(c_1, c_2) = |\mathit{gloss}(c_1) \cap \mathit{gloss}(c_2)|$
where $\mathit{gloss}(c_i)$ returns the multiset of words in a concept's gloss.
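A direct rendering of the overlap (a sketch only: tokenization is reduced to lower-cased whitespace splitting, with no stopword removal):

```python
from collections import Counter

def rel_gloss(gloss1, gloss2):
    """Size of the multiset intersection of the word tokens of two glosses (article texts)."""
    bag1 = Counter(gloss1.lower().split())
    bag2 = Counter(gloss2.lower().split())
    return sum((bag1 & bag2).values())
```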
From the concept vector based measures, we select the ESA measure (Gabrilovich and Markovitch, 2007), abbreviated as ConceptVector. The measure is based on concept vectors derived from Wikipedia articles $a_1, \ldots, a_N$, where $N$ is the number of Wikipedia articles. Each element of the concept vector $\vec{d}$ is associated with a certain Wikipedia article $a_i$. If the word $w$ can be found in this article, the word's tf.idf score (Salton and McGill, 1983) in the article $a_i$ is assigned to the concept vector element $d_i$. Otherwise, 0 is assigned.
$d_i = \begin{cases} \mathit{tf.idf}(w) & \text{if } w \in a_i \\ 0 & \text{otherwise} \end{cases}$
As a result, the vector $\vec{d}(w)$ represents the word $w$ in the Wikipedia concept space. Semantic relatedness of two words can then be computed as the cosine of their corresponding concept vectors:
$\mathit{rel}_{Vector}(w_1, w_2) = \frac{\vec{d}(w_1) \cdot \vec{d}(w_2)}{\|\vec{d}(w_1)\| \, \|\vec{d}(w_2)\|}$
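The following sketch builds the concept space with scikit-learn's TfidfVectorizer and compares two words by the cosine of their concept vectors. Using TfidfVectorizer and skipping the pruning of low-weight dimensions are simplifying assumptions; the original ESA implementation uses its own tf.idf weighting.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

def build_concept_space(article_texts):
    """tf.idf term-document matrix over the Wikipedia articles (one row per article)."""
    vectorizer = TfidfVectorizer()
    matrix = vectorizer.fit_transform(article_texts)      # shape: (N articles, vocabulary size)
    return vectorizer.vocabulary_, matrix

def rel_concept_vector(word1, word2, vocab, matrix):
    """Cosine between the two words' concept vectors, i.e. the matrix columns for the words."""
    w1, w2 = word1.lower(), word2.lower()
    if w1 not in vocab or w2 not in vocab:
        return None                                       # word not covered by any article
    v1 = matrix[:, vocab[w1]].toarray().ravel()
    v2 = matrix[:, vocab[w2]].toarray().ravel()
    denom = np.linalg.norm(v1) * np.linalg.norm(v2)
    return float(v1 @ v2 / denom) if denom else 0.0
```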
Another vector based measure (abbreviated as LinkVector) was introduced by Milne (2007) and Nakayama et al. (2007). It makes use of the links between Wikipedia articles, but it does not measure path lengths like the path based measures. The LinkVector measure is based on the set of links that point to other articles (called 'targets'). The more targets two articles share, the higher their semantic relatedness. Links to targets are considered less significant if many other articles also link to the same target. For example, a link to a very common target like automobile is less important than a link to a more specific target like Ethanol fuel. Formally, the weight $\omega$ of a link is defined as:
$\omega(s \rightarrow t) = \begin{cases} \log \frac{N}{|T|} & \text{if } s \in T \\ 0 & \text{otherwise} \end{cases}$
where $T$ is the set of all articles that link to $t$, and $N$ is the number of Wikipedia articles. An article is then represented as a vector $\vec{l}$ of weighted outgoing links. The semantic relatedness of two terms is computed as the cosine of the link weight vectors of the corresponding articles:
$\mathit{rel}_{LinkVector}(a_1, a_2) = \frac{\vec{l}(a_1) \cdot \vec{l}(a_2)}{\|\vec{l}(a_1)\| \, \|\vec{l}(a_2)\|}$
where $a_i$ are the Wikipedia articles corresponding to the terms. An article corresponds to a term if its title or one of its redirects matches the term, or if the article is linked on a disambiguation page whose title matches the term.
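A sketch of the link weighting and comparison, assuming the outgoing links of every article are given as a dictionary of target sets and the number of incoming links per target has been pre-counted; mapping a term to its article via titles, redirects, and disambiguation pages is not shown:

```python
import math

def link_vector(article, outlinks, incoming_counts, n_articles):
    """Weighted outgoing-link vector of one article, with weight(s -> t) = log(N / |T|)."""
    return {t: math.log(n_articles / incoming_counts[t])
            for t in outlinks.get(article, set()) if incoming_counts.get(t)}

def rel_link_vector(a1, a2, outlinks, incoming_counts, n_articles):
    """Cosine of the two articles' weighted link vectors."""
    v1 = link_vector(a1, outlinks, incoming_counts, n_articles)
    v2 = link_vector(a2, outlinks, incoming_counts, n_articles)
    dot = sum(w * v2[t] for t, w in v1.items() if t in v2)
    norm = math.sqrt(sum(w * w for w in v1.values())) * math.sqrt(sum(w * w for w in v2.values()))
    return dot / norm if norm else 0.0
```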
Experimental Setup
For our analysis, Wikipedia snapshots from different points of time are required. The Wikimedia Foundation only provides recent snapshots 2 , and we are not aware of any other repository. However, the Wikimedia Foundation provides special snapshots that contain all revisions. From such snapshots, any past state of the Wikipedia can be reconstructed. For that purpose, we designed a data conversion tool that is able to create a snapshot of any Wikipedia language edition at any point in time since it was created. We access these snapshots using the JWPL Wikipedia API (Zesch et al., 2008a). For a meaningful analysis, a large Wikipedia language version is necessary that provides sufficient coverage on the evaluation datasets. The English Wikipedia is the largest language edition, but unfortunately the recent snapshot containing all revisions is currently unavailable from the Wikimedia Foundation due to technical problems. The most recent available English snapshot containing all revisions was released in 2006, which would only provide a limited number of snapshots. Thus, we perform our experiments using the German Wikipedia which is the second largest edition that was created shortly after the English edition. Additionally, German evaluation datasets for semantic relatedness and word choice problems are available (Zesch et al., 2008b), which is not the case for most other languages. For our experiments, we created a snapshot of the German Wikipedia every 183 days (6 months) starting December 1st, 2002 (see Table 1). Each of the snapshots is used as a resource for computing semantic relatedness. We evaluate the performance of semantic relatedness measures using two evaluation approaches: (i) correlation with human judgments and (ii) solving word choice problems.
Correlation with Human Judgments
Evaluation datasets for correlation with human judgments are created by asking human annotators to judge the semantic relatedness of presented word pairs. The gold standard score assigned to a word pair is the average score over all human judges. For evaluation, the gold standard dataset is then correlated with the scores computed by a particular semantic relatedness measure. We use the Spearman rank correlation coefficient ρ, where a value of 0 means no correlation and a value of 1 means perfect correlation.
We use two publicly available German datasets. 3 The Gur-65 dataset contains 65 word pairs from the English study by Rubenstein and Goodenough (1965) translated to German. This dataset only contains nouns, and human judgments rated the similarity between the words. The Gur-350 dataset contains 350 word pairs collected in a study by Gurevych (2005). This dataset contains nouns, verbs and adjectives connected by classical and nonclassical relations (Morris and Hirst, 2004) that were rated by humans according to the relatedness between the words. It also contains a lot of domain-specific word pairs. Thus, this dataset will be more informative with respect to the coverage provided by a certain Wikipedia snapshot. We define coverage as the percentage of word pairs in the evaluation dataset for which a semantic relatedness measure using a certain Wikipedia snapshot is able to compute a score, i.e. both words could be found in Wikipedia (either as an article title or mentioned in the article text, depending on the kind of information used by a specific semantic relatedness measure).
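Evaluation on such a dataset then amounts to restricting the gold pairs to those covered by the measure and computing Spearman's rho on that subset. A sketch, assuming the relatedness measure is a callable that returns None for uncovered pairs:

```python
from scipy.stats import spearmanr

def evaluate_word_pairs(gold_pairs, relatedness):
    """gold_pairs: (word1, word2, human_score) triples. Returns (Spearman rho, coverage)."""
    human, system = [], []
    for w1, w2, gold in gold_pairs:
        score = relatedness(w1, w2)
        if score is not None:          # pair covered by the Wikipedia snapshot
            human.append(gold)
            system.append(score)
    coverage = len(human) / len(gold_pairs)
    rho = spearmanr(human, system).correlation if len(human) > 1 else None
    return rho, coverage
```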
Solving word choice problems
A word choice problem (Jarmasz and Szpakowicz, 2003; Turney, 2006) consists of a target word and four candidate words or phrases. The objective is to pick the one that is most closely related to the target.
beret
a) round cap
b) cap with horizontal peak
c) wedge cap
d) helmet
There is always only one correct candidate, 'a' in this case. The semantic relatedness between the target 'beret' and each of the candidates is computed by a semantic relatedness measure, and the candidate with the maximum relatedness value is chosen. For preprocessing and handling of multiword expressions, we follow the approach outlined in (Zesch et al., 2008b).
If two or more candidates are equally related to the target, the candidates are said to be tied. If one of the tied candidates is the correct answer, the problem is counted as correctly solved, but the corresponding score is reduced. We assign a score $s_i$ of $\frac{1}{\#\text{ of tied candidates}}$ (in effect approximating the score obtained by randomly guessing one of the tied candidates). Thus, a correctly solved problem without ties is assigned a score of 1.
We evaluate the word choice problems using accuracy and coverage. We define accuracy as $Acc = \frac{S}{|A|}$, where $S$ is the sum of all scores $s_i$, and $|A|$ is the number of word choice problems that were attempted by the semantic relatedness measure. Coverage is then defined as $Cov = \frac{|A|}{n}$, where $n$ is the total number of word choice problems. Accuracy indicates how many of the attempted problems could be answered correctly, and coverage indicates how many problems were attempted. The overall performance of a measure needs to take accuracy and coverage into account, as a measure might get a better coverage by sacrificing accuracy and vice versa. Thus, we define the combined evaluation metric $H = \frac{2 \cdot Acc \cdot Cov}{Acc + Cov}$, i.e. the harmonic mean of accuracy and coverage.
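The tie-aware scoring and the three figures can be written down directly. A sketch, where `relatedness` again returns None for uncovered words and a problem counts as attempted whenever at least one candidate can be scored (the exact attempt criterion is an assumption):

```python
def solve_word_choice(problems, relatedness):
    """problems: (target, candidates, correct_index) triples. Returns (Acc, Cov, H)."""
    attempted, total_score = 0, 0.0
    for target, candidates, correct in problems:
        scores = [relatedness(target, c) for c in candidates]
        known = [s for s in scores if s is not None]
        if not known:
            continue                            # problem not attempted
        attempted += 1
        best = max(known)
        tied = [i for i, s in enumerate(scores) if s == best]
        if correct in tied:
            total_score += 1.0 / len(tied)      # score 1 / (# of tied candidates)
    acc = total_score / attempted if attempted else 0.0
    cov = attempted / len(problems)
    h = 2 * acc * cov / (acc + cov) if acc + cov else 0.0
    return acc, cov, h
```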
We use a dataset collected by Zesch et al. (2008b). It contains 1008 word choice problems from the January 2001 to December 2005 issues of the German-language edition of Reader's Digest (Wallace and Wallace, 2001-2005). As this dataset contains more complex vocabulary and is significantly larger than the available word pair datasets for correlation with human judgment, it is better suited to assess the coverage of a resource.
Results and Discussion
Correlation with Human Judgments
Figure 2 shows the obtained coverage on the two datasets using the Wikipedia snapshots. As the Gloss and the LinkVector measure display equal coverage, we combined them in this figure. We find that the ConceptVector measure generally covers more word pairs than the other measures. The ConceptVector measure also displays high initial coverage even when using the quite small first snapshot from 2002. This is due to the special property of the ConceptVector measure that a word is covered if it is contained in a Wikipedia article. This is in contrast to the other measure types, where the word has to appear as an article title or redirect. As Wikipedia article titles are mainly nouns or noun phrases, the coverage of verbs and adjectives contained in the Gur-350 dataset is limited for the Path, the Gloss, and the LinkVector measure.
The Path measure does not cover any word pairs before the snapshot 2004-2, as it relies on the category system that was not added to Wikipedia until 2004. On the small Gur-65 dataset, the Path and the Gloss measure reach the same coverage as the ConceptVector measure when looking at the more recent snapshots. However, on the larger Gur-350 dataset that contains more domain-specific vocabulary, the ConceptVector measure still has a much higher coverage than the other measures; see Figure 2 (b). For the early snapshots, coverage rises steeply for all measures, while for the recent snapshots only small increases in coverage can be observed. Figure 3 shows the obtained Spearman correlations between the human judgments in the gold standard dataset and the scores computed by the semantic relatedness measures. As correlation values, which are based on a small number of word pairs, are not reliable, we only present them if the coverage reaches at least 30% of the full dataset and at least 20 word pairs are covered. Thus, the lines in the chart which correspond to measure types with a low coverage do not extend over all the snapshots. Note that we generally cannot compare the correlation scores between single measures, as they were obtained on different subsets of the evaluation dataset due to the different coverage of the measures. Thus, the important information in this chart is the behavior of a single measure over time. On the Gur-65 dataset, as shown in Figure 3 (a), the correlation scores obtained by the Path measure differ much between the snapshots. However, these correlation scores are based on a very small number of word pairs, as shown in Figure 2 (a), and are quite unreliable. If we only look at the last four snapshots, where all measure types yield the same high coverage, results get more stable. For the larger Gur-350 dataset, as shown in Figure 3 (b), all measures display almost stable correlation scores without statistically significant differences for the more recent snapshots. This means that Wikipedia's growth does not have significant effects on the task performance.
In the analysis presented above, the Spearman correlation scores are computed using as many word pairs as are covered by a certain snapshot. Thus, the analysis cannot tell us whether the growth of Wikipedia has an influence on the core set of word pairs covered by all snapshots. We therefore perform an additional analysis where we only use a fixed set of word pairs covered by all snapshots. As we need a sufficient number of initially covered word pairs, we limit our analysis to the Gur-350 dataset and the ConceptVector measure. With this setting, even the initial snapshot from 2002 already covers over 50% of all word pairs in the Gur-350 dataset, cf. Figure 2 (b). Figure 4 visualizes the obtained results: in the beginning, the performance rises from snapshot to snapshot and then stays almost stable without statistically significant differences. This means that even extensive changes like re-structuring, extending and adding articles do not have a significant influence on the performance of the ConceptVector measure on the initially covered word pairs. The ConceptVector measure is remarkably stable, as it draws knowledge from different articles in parallel and is thus not easily influenced by changes restricted to a subset of articles.
Solving Word Choice Problems
Figures 5 (a), (b), and (c) compare the four measure types according to accuracy, coverage, and the harmonic mean of accuracy and coverage. We find that the accuracy values of the ConceptVector, LinkVector, and Gloss measures are almost stable for later snapshots, while the Path measure shows a falling trend. However, the higher values for the Path measure are unreliable, as they are obtained on snapshots with a very low coverage. Thus, we can conclude that the growth of Wikipedia has almost no effect on its suitability for solving word choice problems. However, it has a positive effect on coverage as shown in Figure 5 (b), although coverage increases between the more recent snapshots are small, showing a log-like trend. Note that in this task the coverage of the Gloss measure is not equal to the coverage of the LinkVector measure (as was the case for correlation with human judgments). The difference is due to the LinkVector measure, which returns 0 if two articles have no links in common. Zero scores often cause a measure not to attempt to solve a word choice problem, as this provides insufficient information for giving an answer. The Gloss measure only returns 0 if two articles do not share a single word, which happens less often.
The Path measure relying on the Wikipedia category graph only yields coverage comparable to the Gloss or ConceptVector measure when using very recent snapshots. The LinkVector measure generally shows quite low coverage. As accuracy is almost constant and coverage rises, the combined performance values (H) in Figure 5 (c) are bound to coverage. The results on this task are consistent with the results obtained on the other evaluation task, correlation with human judgments: Wikipedia's growth increases the coverage, while the accuracy is stable.
Overall, we can conclude that, as expected, the growth of Wikipedia has a positive effect on coverage. Surprisingly, it has almost no effect on the suitability of Wikipedia as a resource for computing semantic relatedness. Especially for the ConceptVector measure, correlation values and coverage are quite high even for smaller snapshots. Thus, even small language-specific versions of Wikipedia can be used for computing semantic relatedness if there are no developed classical resources for a certain language. If the coverage provided by an older snapshot is already sufficient for a certain task, smaller (and thus computationally less demanding) Wikipedia snapshots can be used without negative effects on the task performance.
Summary
We analyzed the influence of the Wikipedia's growth on the performance of NLP applications using Wikipedia as a knowledge source. As Wikipedia is a structured resource that grows in different ways, we selected the task of computing semantic relatedness for evaluation. The different types of semantic relatedness measures (path based, gloss based, concept vector based, and link vector based) test a wide range of Wikipedia's properties. We evaluated the performance of semantic relatedness using two tasks: correlation with human judgments and solving word choice problems. We created 6-monthly snapshots of the German Wikipedia that are used as knowledge sources for the relatedness measures. Our analysis performed on the German Wikipedia shows that the growth has little effect on the performance of semantic relatedness measures. It rises for the early snapshots providing very low coverage, and then stays stable even for the quite large more recent snapshots. This property, together with the increasing coverage, makes Wikipedia a valuable resource in the context of large-scale NLP applications, where coverage is one of the major criteria for overall performance.
Even if we selected semantic relatedness for evaluation, which directly assesses a wide range of different Wikipedia properties, other natural language processing applications might still display different behavior. Thus, we make the Wikipedia snapshots used in this study available upon request. We hope that this will foster research on the influence of Wikipedia's growth on other NLP tasks. Additionally, we will make the TimeMachine for creating the Wikipedia snapshots publicly available as part of the JWPL tool. 4 In future work, we plan to verify our results using the English Wikipedia (as soon as the required data gets available) and other NLP tasks.
Figure 1: Growth of the German Wikipedia.
Figure 2: Coverage of measure types on the word relatedness datasets.
Figure 3: Correlation of measure types with human judgments.
Figure 4: Performance of the ConceptVector measure using a fixed set of word pairs from the Gur-350 dataset.
Figure 5: Performance of measure types when solving word choice problems.
Table 1: Growth of the German Wikipedia.
2 http://download.wikimedia.org/
3 http://www.ukp.tu-darmstadt.de/data/semantic-relatedness/
4 http://www.ukp.tu-darmstadt.de/software/jwpl/
Acknowledgments
This work has been supported by the Volkswagen Foundation as part of the Lichtenberg-Professorship Program under grant No. I/82806. We thank Anouar Haha and Ivan Galkin for implementing the data conversion tool used for creating the Wikipedia snapshots.
References
Alexander Budanitsky and Graeme Hirst. 2006. Evaluating WordNet-based Measures of Semantic Distance. Computational Linguistics, 32(1):13-47.
Luciana Buriol, Carlos Castillo, Debora Donato, Stefano Leonardi, and Stefano Millozzi. 2006. Temporal Analysis of the Wikigraph. In Proceedings of Web Intelligence, pages 45-51, Hong Kong.
Kenneth W. Church and Robert L. Mercer. 1993. Introduction to the special issue on Computational Linguistics using large corpora. Computational Linguistics, 19(1):1-24.
Christiane Fellbaum. 1998. WordNet: An Electronic Lexical Database. MIT Press, Cambridge, MA.
Evgeniy Gabrilovich and Shaul Markovitch. 2007. Computing Semantic Relatedness using Wikipedia-based Explicit Semantic Analysis. In Proceedings of The 20th International Joint Conference on Artificial Intelligence (IJCAI), pages 1606-1611, Hyderabad, India, January.
Iryna Gurevych. 2005. Using the Structure of a Conceptual Network in Computing Semantic Relatedness. In Proceedings of the 2nd International Joint Conference on Natural Language Processing, pages 767-778, Jeju Island, Republic of Korea.
Mario Jarmasz and Stan Szpakowicz. 2003. Roget's Thesaurus and Semantic Similarity. In Proceedings of Recent Advances in Natural Language Processing (RANLP), pages 111-120, Borovets, Bulgaria.
Michael Lesk. 1986. Automatic Sense Disambiguation Using Machine Readable Dictionaries: How to tell a pine cone from an ice cream cone. In Proceedings of the 5th Annual International Conference on Systems Documentation, pages 24-26, Toronto, Ontario, Canada.
Olena Medelyan, David Milne, Catherine Legg, and Ian H. Witten. 2009. Mining Meaning from Wikipedia. International Journal of Human-Computer Studies, 67(9):716-754.
Rada Mihalcea and Dan I. Moldovan. 1999. A Method for Word Sense Disambiguation of Unrestricted Text. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, pages 152-158, College Park, Maryland, USA, June.
David Milne and Ian H. Witten. 2008. An Effective, Low-Cost Measure of Semantic Relatedness Obtained from Wikipedia Links. In Proceedings of the first AAAI Workshop on Wikipedia and Artificial Intelligence (WIKIAI'08), pages 25-30, Chicago, USA.
David Milne. 2007. Computing Semantic Relatedness using Wikipedia Link Structure. In Proceedings of the New Zealand Computer Science Research Student Conference (NZCSRSC 2007), Hamilton, New Zealand.
Jane Morris and Graeme Hirst. 2004. Non-Classical Lexical Semantic Relations. In Workshop on Computational Lexical Semantics, Human Language Technology Conference of the North American Chapter of the ACL, pages 46-51, Boston.
Kotaro Nakayama, Takahiro Hara, and Shohiro Nishio. 2007. Wikipedia Mining for an Association Web Thesaurus Construction. In Proceedings of International Conference on Web Information Systems Engineering (WISE), pages 322-334, Nancy, France, December.
Siddharth Patwardhan and Ted Pedersen. 2006. Using WordNet Based Context Vectors to Estimate the Semantic Relatedness of Concepts. In Proceedings of the EACL 2006 Workshop Making Sense of Sense - Bringing Computational Linguistics and Psycholinguistics Together, pages 1-8, Trento, Italy.
Simone Paolo Ponzetto and Michael Strube. 2007. Knowledge Derived from Wikipedia for Computing Semantic Relatedness. Journal of Artificial Intelligence Research, 30:181-212.
Roy Rada, Hafedh Mili, Ellen Bicknell, and Maria Blettner. 1989. Development and Application of a Metric on Semantic Nets. IEEE Trans. on Systems, Man, and Cybernetics, 19(1):17-30.
Herbert Rubenstein and John B. Goodenough. 1965. Contextual Correlates of Synonymy. Communications of the ACM, 8(10):627-633.
Gerard Salton and Michael J. McGill. 1983. Introduction to Modern Information Retrieval. McGraw-Hill, New York.
Peter Turney. 2006. Expressing Implicit Semantic Relations without Supervision. In Proceedings of the 21st International Conference on Computational Linguistics and the 44th annual meeting of the ACL, pages 313-320, Sydney, Australia.
DeWitt Wallace and Lila Acheson Wallace. 2001-2005. Reader's Digest, das Beste für Deutschland. Jan 2001-Dec 2005. Verlag Das Beste, Stuttgart.
Torsten Zesch and Iryna Gurevych. 2010. Wisdom of Crowds versus Wisdom of Linguists - Measuring the Semantic Relatedness of Words. Journal of Natural Language Engineering, 16(1):25-59, January.
Torsten Zesch, Iryna Gurevych, and Max Mühlhäuser. 2007. Comparing Wikipedia and German Wordnet by Evaluating Semantic Relatedness on Multiple Datasets. In Proceedings of Human Language Technologies: The Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL-HLT 2007), pages 205-208, Rochester, NY, USA.
Torsten Zesch, Christof Müller, and Iryna Gurevych. 2008a. Extracting Lexical Semantic Knowledge from Wikipedia and Wiktionary. In Proceedings of the Conference on Language Resources and Evaluation (LREC), Marrakech, Morocco. Electronic proceedings.
Torsten Zesch, Christof Müller, and Iryna Gurevych. 2008b. Using Wiktionary for Computing Semantic Relatedness. In Proceedings of the Twenty-Third AAAI Conference on Artificial Intelligence, pages 861-867, Chicago, IL, USA. |
15,221,229 | ISSUES IN DESIGN AND COLLECTION OF LARGE TELEPHONE SPEECH CORPUS FOR SLOVENIAN LANGUAGE | In this paper, different issues in design, collection and evaluation of the large vocabulary telephone speech corpus of Slovenian language are discussed. The database is composed of three text corpora containing 1530 different sentences. It contains read speech of 82 speakers where each speaker read in average more than 200 sentences and 21 speakers read also the text passage of 90 sentences. The initial manual segmentation and labeling of speech material was performed. Based on this the automatic segmentation was carried out. The database should facilitate the development of speech recognition systems to be used in dictation tasks over the telephone. Until now the database was used mostly for isolated digit recognition tasks and word spotting. | [] | ISSUES IN DESIGN AND COLLECTION OF LARGE TELEPHONE SPEECH CORPUS FOR SLOVENIAN LANGUAGE
Zdravko Kačič, Bogomir Horvat
Faculty of Electrical Engineering and Computer Science
University of Maribor
Smetanova 17, 2000 Maribor, Slovenia
Aleksandra Zögling (sandra.zogling@uni-mb.si)
University of Maribor
Research and Study Centre, Razlagova 22, 2000 Maribor, Slovenia
ISSUES IN DESIGN AND COLLECTION OF LARGE TELEPHONE SPEECH CORPUS FOR SLOVENIAN LANGUAGE
In this paper, different issues in design, collection and evaluation of the large vocabulary telephone speech corpus of Slovenian language are discussed. The database is composed of three text corpora containing 1530 different sentences. It contains read speech of 82 speakers where each speaker read in average more than 200 sentences and 21 speakers read also the text passage of 90 sentences. The initial manual segmentation and labeling of speech material was performed. Based on this the automatic segmentation was carried out. The database should facilitate the development of speech recognition systems to be used in dictation tasks over the telephone. Until now the database was used mostly for isolated digit recognition tasks and word spotting.
Introduction
Design and collection of large telephone speech databases requires a compromise between a large number of speakers with a limited amount of speech material per speaker, and fewer speakers with much more speech material per speaker. Recently the SpeechDat project was accomplished, with a number of databases recorded that contain a large number of speakers (1000 or 5000 speakers) with approximately 5 minutes of speech material per speaker (Höge et al., 1997). The databases created achieved a good coverage of dialect, gender, and different call environments and surely represent a solid basis for the development of various voice driven teleservices.
However, the lack of more speech data per speaker hampers the development, and especially the evaluation, of systems in the development phase for applications like dictation over the telephone. For applications such as an automatic telephone assistant with e-mail reading and writing capabilities via voice, large vocabulary telephone speech databases with a large amount of speech material per speaker would facilitate the development. Such databases could help especially in the pre-evaluation phase of the developed speech recognition systems.
Goals of the database design
The speech database SNABI was designed as a large corpus speech database to meet the development needs mentioned above. The requirements for the database were that it should contain, on the one hand, a sufficient number of speakers to allow research in the speaker-independent speech recognition field and, on the other, enough speech material per speaker to allow research in the speech dictation task. The developed database should in this way facilitate the development of speaker-independent telephone speech recognition systems for a wide range of applications, including the speech dictation task. The database should therefore consist of different corpora (isolated words, sentences from different domains, a text passage) to meet these requirements.
Corpora of the database
The text corpus defined for the recordings was divided into a corpus of isolated words, number strings, and alphabet, a general purpose text corpus (lingua), a task specific text corpus (MMC), as well as a text passage. Table 1 shows an overview of the corpora used.
Table 1: Overview of the text corpora used in recording the database.
The text corpus lingua consists of phonetically rich sentences and is aimed to cover a wide range of telephone speech applications, integrating also the dictation task. The MMC corpus was designed with the aim to cover different levels of spoken phenomena in a particular constrained domain (office automation, railway information). The average length of sentences in the MMC corpus is 7.5 words, in corpus lingua 9 words and in the passage 15 words.
Database structure
The database contains speech and corresponding annotation files that are stored in a specified directory structure. The directory structure is content dependent, where files are organized according to the corpora used and further according to the sub-corpora. At the highest level the structure consists of the directories: words, lingua, and mmc. At the lowest level there are directories for the corresponding sub-corpora (for example, lingua 1, lingua 2, lingua 3, and lingua 4 for the lingua corpus). Every directory at this level contains the speech material of all the speakers that read the text of the sub-corpus. The directory named /doc contains all the documentation of the database. The documentation consists of a table of the SAMPA symbols used in the phonetic transcription of the text, the lexicon, the speaker information table, and an ISO 8859-2 table. Filenames follow the ISO 9660 file name conventions (8 plus 3 characters) according to the main CD-ROM standard. As it is useful for the user to be able to determine the content of the speech file by looking at the filename, the file name consists of different codes denoting speaker, recording type, corpus name, item ID, and file type. The following template is used:
XX Y V ZZZZ . N L M
where:
XX - speaker ID code
Y - code of the sub-lexicon used (W - words, N - numbers, A - alphabet, S - number string, M - MMC corpus, L - lingua corpus, P - passage)
V - type of recording (telephone)
ZZZZ - sentence ID number
N - type of pronunciation (W - isolated word, S - sentence)
L - language
M - file type (S - signal, W - segmentation and labeling on word level, P - segmentation and labeling on phone level)
For the signal format the SAM format was used (Tomlinson et al., 1988), and signal and data are therefore written in different files. Defining the structure of the annotation file, the SPEECHDAT format of annotation files was also considered (Draxler, 1998). In this way it is possible to update the annotation files with additional information, not being limited by the header length as in the case where a speech signal file has a header of fixed length. To each speech signal one or two annotation files are associated. Both files differ only in the segmentation and labeling data. One file contains segmentation and labeling data for word level segmentation and annotation, and the other for phoneme level. An example of an annotation file is given in Figure 1.
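For illustration, a file name following this template could be decomposed as sketched below; the concrete code letters used in the example (the recording-type letter and the language letter) are assumptions, since only the field meanings are specified above.

```python
import re

# SNABI file name template XXYVZZZZ.NLM described above:
# 2-char speaker ID, 1-char sub-lexicon code, 1-char recording type,
# 4-char item ID; the 3-char extension holds pronunciation type,
# language and file type.
FILENAME_RE = re.compile(
    r"^(?P<speaker>..)(?P<lexicon>[WNASMLP])(?P<recording>.)(?P<item>....)"
    r"\.(?P<pronunciation>[WS])(?P<language>.)(?P<filetype>[SWP])$")

def parse_snabi_name(name: str) -> dict:
    """Split a database file name into its coded fields ({} if it does not match)."""
    m = FILENAME_RE.match(name)
    return m.groupdict() if m else {}

# Hypothetical example: speaker "01", lingua corpus, telephone recording "T",
# sentence 0042, sentence-level pronunciation, language "S", signal file.
print(parse_snabi_name("01LT0042.SSS"))
```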
Speaker selection and database recording
In contrast with small vocabulary telephone speech databases that consist of hundreds or thousands of speakers, the selection of speakers in the case of large vocabulary speech databases with several tens of speakers can be more deliberate. In spite of that, we did not define any special criteria according to which the speaker selection should be made. The goal in speaker selection was to achieve a good balance across gender and age categories. The majority of speakers were students and employees of the university. Speakers were from different dialect regions, although we did not cover all the main dialect regions in Slovenia. During the recording the speakers were not instructed to use standard pronunciation, but rather to use their usual pronunciation.
Altogether 82 speakers were recorded. Of these, 31 were female and 51 male speakers. The majority of the speakers were around 21 years old; the youngest was 16 years and the oldest 62 years old. Six out of ten main dialect regions in Slovenia were covered by the speakers.
All recordings were done in a normal office acoustic environment avoiding exaggerated presence of sounds like door slam, background music or cross talk, paper rustle, etc. The speakers read text from specially designed booklets with one item (word, letter, digit string or sentence) per page. In this way the speakers were able to concentrate only on the item uttered. They also did not try to hurry reading the text.
The speakers read different groups of sub-corpora making short pauses after each sub-corpus. The isolated words, alphabet, and set of digit strings were considered as sub-corpora during the recording.
Each speaker uttered on average 200 sentences, 80 isolated words (containing also digits), 20 digit strings and the alphabet. Of the 82 speakers, 21 speakers uttered 450 sentences and 20 speakers also read the passage of 90 thematically connected sentences (simulating a voice dictation task). The database contains 21,416 recordings of sentences, 5,760 isolated words and 5,280 recordings of digits in digit strings. The total database consists of approx. 25 hours of telephone speech.
The database was recorded in a time span of 3 years. The calls were made over the analogue and digital telephone lines. The speech was first recorded with DAT recorder and then transferred over the digital connection (using DAT link) to workstation, where it was stored with 16 kHz sampling rate and 16 bit linear quantization with MSB-LSB byte order.
Segmentation
As the speech was originally recorded with DAT recorder the first segmentation was done on a word level during the data transfer to speech files on digital computer. The transfer and segmentation was done with proprietary software developed at the University of Maribor. During this process all the speech material was also manually checked.
For manual segmentation and labelling on phonemic level a proprietary software tool was used that was also developed at the University of Maribor. For labeling the MRPA alphabet for Slovenian language was used (MRPA, 1999). The developed tool allows positioning and inserting the segment borders. It further enables labeling of defined segments using the IPA symbols, which are transformed to appropriate MRPA symbols when the segments and labels are saved to a file. When the segments and labels are read from a file the MRPA symbols are converted to the corresponding IPA symbols, which are then displayed on the screen as labels. The tool also enables listening to individual segments or to an arbitrary selected segment of speech signal.
Part of the database was manually segmented and labelled at the phonemic level. The rest of the database was automatically segmented with the HTK toolkit using the manually segmented speech material.
While developing the system, many different feature extraction methods have been tested. The results presented in this paper are obtained for a frame rate of 5 ms using a 20 ms window length. Speech was analysed with mel-frequency cepstral coefficients, using also delta and delta-delta coefficients and energy.
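A rough illustration of such a front-end (not the original implementation) is sketched below using librosa; the number of cepstral coefficients is an assumption.

```python
import numpy as np
import librosa

def snabi_features(path, sr=16000, n_mfcc=13):
    """MFCC + delta + delta-delta (+ frame energy) at a 5 ms hop / 20 ms window."""
    y, sr = librosa.load(path, sr=sr)
    hop, win = int(0.005 * sr), int(0.020 * sr)     # 5 ms frame rate, 20 ms window
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc,
                                n_fft=512, hop_length=hop, win_length=win)
    d1 = librosa.feature.delta(mfcc)                # delta coefficients
    d2 = librosa.feature.delta(mfcc, order=2)       # delta-delta coefficients
    energy = librosa.feature.rms(y=y, frame_length=win, hop_length=hop)
    return np.vstack([mfcc, d1, d2, energy])        # shape: (3*n_mfcc + 1, n_frames)
```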
Simple 3-state left-right continuous density HMM models were used, in which each observation probability distribution was represented by a Gaussian density.
For segmentation evaluation purposes 300 randomly selected sentences were used for HMM training and a set of 400 randomly selected sentences were used for test. There was no overlap between the two sets.
The segmentation error was defined as a difference between automatically determined and manually placed segment boundaries.
The segmentation error can be quantitatively expressed by counting an automatic boundary position as correct if it falls within a given margin (the so-called "correct margin") of the reference segmentation. The number of correctly positioned boundaries divided by the total number of boundaries then gives the segmentation accuracy (Pauws, 1996). Figure 4 shows the preliminary results of speech segmentation accuracy. Currently, work on increasing the segmentation accuracy is being performed.
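A minimal sketch of this accuracy measure follows; the exact matching convention is an assumption (each manually placed boundary counts as correct if some automatically determined boundary lies within the margin).

```python
def segmentation_accuracy(manual, automatic, margin=0.02):
    """Fraction of manual boundaries (in seconds) that have an automatic
    boundary within +/- margin seconds (the 'correct margin')."""
    if not manual:
        return 0.0
    correct = sum(1 for m in manual
                  if any(abs(m - a) <= margin for a in automatic))
    return correct / len(manual)

# Example with a 20 ms correct margin: two of three boundaries are matched.
print(segmentation_accuracy([0.10, 0.25, 0.41], [0.11, 0.27, 0.60], margin=0.02))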
An iterative procedure is planned in which part of the automatically segmented material is manually verified and added to the existing manually checked material. This is then used as a basis for a new iteration of the automatic segmentation procedure. The process will be stopped when the verification of the automatically segmented material shows sufficient accuracy.
Database evaluation
The database has so far been used in several experiments on automatic speech recognition. Mostly the tasks were isolated digit recognition and word spotting. In (Bub, 1997) the database was used in a multilingual speech recognition experiment. In the isolated digit recognition task reported in (Imperl, 1997), a comparative study of continuous density hidden Markov models and semi-continuous hidden Markov models was performed. In the implemented speech recognition system cepstral features were used, expanded with dynamic features (delta and delta-delta coefficients), and diphone modeling was performed using the Laplacian probability density function. The experiments were performed for the SNABI and VoiceMail (German language) speech databases. On the isolated digit recognition task, using two monolingual speech recognition systems with the same structure (for Slovenian and German), an average recognition accuracy of 95.1% was obtained for the VoiceMail database and 98.6% for the SNABI database. The database was also used in word spotting tasks (Kaiser, 1997; Kaiser, 1997a) and in acoustic modeling with genetic algorithms (Kaiser, 1998).
Conclusions
The presented speech database is intended to facilitate development of telephone continuous speech recognition systems used in applications that would integrate the speech dictation task. The database was already used in various speech recognition tasks that mainly included isolated digit recognition and word spotting. The preliminary segmentation was performed and the iterative procedure with manual crosscheck of the automatic segmentation accuracy is foreseen to achieve higher segmentation reliability. In the future the evaluation of other parts of the database is foreseen, especially for continuous speech recognition and speech dictation tasks.
Figure 1: Sample annotation file. The annotation file shown in Fig. 1 is the same for word-level and phoneme-level phonetic transcription.
Figure 2: Sample annotation file - part with word-level transcription. Figure 2 shows an example of the annotation file for word-level phonetic transcription, whereas Figure 3 shows an example for phoneme-level transcription.
Figure 3: Sample annotation file - part with phoneme-level transcription.
Figure 4: Preliminary results of database segmentation accuracy.
Table 1 summarises the content of the speaker information table with additional information about speaker background.
Table 1: Speaker background information written in the speaker information table.
Nr.  Description
1    Speaker ID code
2    Gender
3    Dialect regions: 1 - Štajersko
6    Place of residence during the first years of elementary school
7    Place of residence during the longest period of life
8    Dialect region of parents (1-10)
9    Profession
10   Size in cm
11   Weight in kg
12   Reading activity: P - frequent, V - modest, R - seldom
13   Speech activity: Z - very active, A - active, M - moderate, R - seldom
14   Sickness (e.g., asthma)
15   Stimulants (e.g., cigarettes, coffee, alcohol)
16   Degree of funk during recording
The lexicon is fairly simple, as each entry consists of the orthographic form and of the phonetic transcription in the MRPA alphabet (MRPA, 1999).
Bub, U., Köhler, J., and Imperl, B. 1997. In-service adaptation for multilingual hidden Markov models. In ICASSP'97, Munich.
Draxler, C., van den Heuvel, H., and Tropf, H. S. 1998. SpeechDat Experiences in Creating Large Multilingual Speech Databases for Teleservices. LREC Proceedings, 1: 361-366.
Höge, H., Tropf, H. S., Winski, R., van den Heuvel, H., Haeb-Umbach, R., and Choukri, K. 1997. European Speech Databases for Telephone applications. In ICASSP'97, Munich.
Imperl, B., Kačič, Z., and Köhler, J. 1997. Isolated word recognition over the telephone using the Semi-continuous HMM. In Proceedings of the Electrotechnical and Computer Science Conference.
Kaiser, J. 1997. Word spotting in the telephone dialogue systems. In Proceedings of Advances in Speech Technology, Maribor.
Kaiser, J. and Kačič, Z. 1997a. Word spotting using phonetic fillers. In Proceedings of the Electrotechnical and Computer Science Conference.
Kaiser, J. and Kačič, Z. 1998. Training of hidden Markov models with genetic algorithms. In Proceedings of the Electrotechnical and Computer Science Conference.
MRPA. 1999. SAMPA for Slovenian. http://www.phon.ucl.ac.uk/home/sampa/sloven-uni.html.
Pauws, S., Kamp, Y., and Willems, L. 1996. A hierarchical method of automatic speech segmentation for synthesis applications. Speech Communication, 19: 207-220.
Tomlinson, M., Winski, R., and Barry, W. 1988. Label file format proposal. Esprit project 1542 (SAM): Extension Phase, Final Report.
16,222,374 | Focusing Annotation for Semantic Role Labeling | Annotation of data is a time-consuming process, but necessary for many state-of-the-art solutions to NLP tasks, including semantic role labeling (SRL). In this paper, we show that language models may be used to select sentences that are more useful to annotate. We simulate a situation where only a portion of the available data can be annotated, and compare language model based selection against a more typical baseline of randomly selected data. The data is ordered using an off-the-shelf language modeling toolkit. We show that the least probable sentences provide dramatic improved system performance over the baseline, especially when only a small portion of the data is annotated. In fact, the lion's share of the performance can be attained by annotating only 10-20% of the data. This result holds for training a model based on new annotation, as well as when adding domain-specific annotation to a general corpus for domain adaptation. | [
8589745,
890946,
8275542
] | Focusing Annotation for Semantic Role Labeling
Daniel Peterson daniel.w.peterson@colorado.edu
University of Colorado
Martha Palmer mpalmer@colorado.edu
University of Colorado
Shumin Wu shumin.wu@colorado.edu
University of Colorado
Focusing Annotation for Semantic Role Labeling
Semantics, Language Modeling, Annotation, Semantic Role Labeling, Domain Adaptation
Annotation of data is a time-consuming process, but necessary for many state-of-the-art solutions to NLP tasks, including semantic role labeling (SRL). In this paper, we show that language models may be used to select sentences that are more useful to annotate. We simulate a situation where only a portion of the available data can be annotated, and compare language model based selection against a more typical baseline of randomly selected data. The data is ordered using an off-the-shelf language modeling toolkit. We show that the least probable sentences provide dramatic improved system performance over the baseline, especially when only a small portion of the data is annotated. In fact, the lion's share of the performance can be attained by annotating only 10-20% of the data. This result holds for training a model based on new annotation, as well as when adding domain-specific annotation to a general corpus for domain adaptation.
Introduction
Annotation of data is a time-consuming process, but necessary for supervised machine learning approaches. Most state-of-the-art solutions to NLP tasks, including semantic role labeling (SRL), are driven by supervised machine learning algorithms. This requires a large amount of annotation to be performed by humans, often by speciallytrained linguists. Unfortunately, there are not enough annotation hours available to annotate large amounts of data in every potential domain, and so there is considerable attention paid to increasing the usefulness of annotation efforts. Approaches range from one-shot data ranking approaches (Dligach and Palmer, 2009), where there is no need to iterate between annotation and training, to active learning systems (Lewis and Gale, 1994), where the system is supplied with annotation for the examples it is least confident in, to a combination of these two approaches (Dligach and Palmer, 2011). The foremost approach is taken here. There are a few advantages to using a one-shot data ranking approach. First, it is simple -find out how much data can be annotated given the resources available, and select that much data to annotate. Second, it makes good use of the annotators' time. Active learning is built on the premise of a back-and-forth iteration between training a model and annotation, and any delays associated with training are realized in lost productivity. Third, as will be shown, it takes only a small portion of the data to get the lion's share of the performance.
Contributions
The contributions of this paper are two-fold. First, it demonstrates that language models may be used to select a subset of sentences for annotation, outperforming random selection by a considerable margin on the semantic role labeling task. Second, it shows that this result holds when selecting annotation for adapting a general model to a new domain.
Semantic Role Labeling
The semantic role labeling task is recognizing and labeling semantic arguments of a predicate. Typical semantic arguments include Agent, Patient, Theme, etc., and also adjunctive arguments indicating time, location, manner, etc. Of the many semantic representations (FrameNet, VerbNet, etc.), PropBank (Palmer et al., 2005) is the most popular for supervised machine learning approaches because of the wealth of human-annotated corpora. PropBank is layered on top of a constituent-based syntactic parse (Penn Treebank). It annotates verb predicates (and more recently, nominal predicates, adjective predicates, and light-verb constructions) (Bonial et al., 2014) and uses a set of core (numbered) argument and adjunct argument labels on the constituents. ARG0 typically identifies the Agent, while ARG1 represents Patient or Theme. PropBank semantic roles are used in this work. A few example sentences with labeled arguments can be found in Table 1.
(ARG0 John) ate (ARG1 the fish).
(ARG1 The window) broke.
(ARG0 Kate) threw (ARG1 the ball) (ARG2 over the plate).
Language Modeling for Data Selection
In Dligach and Palmer (2009), it was shown that using only a portion of the training data available was useful for the word sense disambiguation (WSD) task. Because in WSD, most of the training examples are of the most-frequent sense, it is difficult to train accurate models for infrequent senses. The data was ordered using probabilistic language models, from the least probable sentences to the most. The intuition behind this ordering is that low-probability sentences are more likely to contain the low-probability senses of a particular word. This heuristic provided a more balanced training set, with more examples of the infrequent senses relative to the frequent ones. In Dligach and Palmer (2011), this approach was coupled with active learning. The least probable 10% of the available data was treated as a "seed" set for a standard ac-tive learning paradigm. A model was trained to perform WSD on the seed set, and then run on the remaining available data. It reported which examples the classifier had the least confidence in. These examples were added, and the model was re-trained, in an iterative fashion. The best performance was achieved using only 15% of the total data for training. This "least probable" seed data significantly outperformed randomly selected seed data, when the same active learning procedure was followed afterward. Using the low-probability sentences ensured that the seed set contained examples of low-probability word senses, that may not be selected by a random seed set. In this paper, we test whether the same technique may be applicable to the SRL task. Intuitively, the most unusual sentences are more likely to contain the low-probability structures that are important to include in the SRL training data. Uncommon arguments or unusual grammatical structures are likely to appear in low-probability sentences. To organize the sentences, we use the SRI Language Modeling Toolkit (SRILM) (Stolcke, 2002), a free, off-the-shelf toolkit. We trained N-gram language models on our annotated data, and then used those language models to compute the probability of each sentence. This probability score is used to rank sentences from least to most probable. We do not explore active learning in the initial iteration of this system, but this is likely to provide additional benefit.
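The ranking step itself is simple. As an illustration only (the actual language models in this work are SRILM N-gram models), the sketch below scores each sentence with a small add-one-smoothed bigram model and sorts the sentences from least to most probable.

```python
import math
from collections import Counter

def train_bigram(corpus):
    """corpus: list of token lists. Returns unigram counts, bigram counts, vocab size."""
    uni, bi = Counter(), Counter()
    for sent in corpus:
        toks = ["<s>"] + sent + ["</s>"]
        uni.update(toks)
        bi.update(zip(toks, toks[1:]))
    return uni, bi, len(uni)

def sentence_logprob(sent, uni, bi, vocab):
    """Add-one smoothed bigram log-probability of one sentence."""
    toks = ["<s>"] + sent + ["</s>"]
    return sum(math.log((bi[(a, b)] + 1) / (uni[a] + vocab))
               for a, b in zip(toks, toks[1:]))

def rank_least_probable(sentences, uni, bi, vocab):
    """Order sentences from least to most probable under the model."""
    return sorted(sentences, key=lambda s: sentence_logprob(s, uni, bi, vocab))
```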
Data Selection for SRL
The experiment was run using ClearSRL (Wu and Palmer, 2011), a state-of-the-art semantic role labeling system, with off-the-shelf settings. This package uses the LIBLINEAR (Fan et al., 2008) classifier for identifying and labeling each argument, relying on constituency-parsed sentences. A corpus of manually-annotated data, roughly 150,000 words, was selected for training. These sentences were ordered using SRILM, off-the-shelf, from least probable to most probable. Testing was carried out on another section of samedomain data, roughly 53,000 words, unseen in training. The examples in Table 2 show representatives of lowprobability and high-probability sentences in the data. In general, the very high-probability sentences are simpler and shorter sentences, with only one or two semantic participants and a single clause. Compound and complex sentences are much harder to label accurately, and in general these sentences have a lower probability.
Training a model on annotated data
We trained several models, each on only a portion of the available annotated training data. Because the sentences were organized from least to most probable, we could select the "least-probable n%" of the data. For comparison, we also trained models on randomly-selected data, and the "most-probable n%" of the data. For all selection criteria, we started with 10% of the data, and added more in 10% increments. The results are summarized in Figure 1, and the data is also in Table 3. It is clear that the least-probable sentences are more valuable for training. In this simple-to-implement paradigm, the first 10% of the data provides a great deal of the overall performance of the system, beating a system trained on a randomly-selected 30%. Classification accuracy does increase as each successive section of training data is added, but there is only a 3% difference in F1 score between the model trained on the least probable 10% of the sentences, and the model trained on all sentences.
Table 2: Example sentences.
Low probability sentences:
"We can force him to produce the poppies illegally and feed into the illegal drug market or we can buy the poppies from him and help provide pain-killing drugs."
"No restaurant I've worked in (and there have been quite a few, ranging from Subway to fine dining) would have found that kind of language acceptable, especially within earshot of customers."
"A consultant in the newspaper article claims that about $12M in private donations is needed annually to support a performance center like Overture, and that a metropolitan area of our size and demographics can be expected to generate about $5M."
High probability sentences:
"The law regarding musical copyrights are clear."
"I think they should keep the monarchy."
"I have a question."
"Keep government away from the internet."
Because SRL is a well-studied task, there is annotated data available for training. However, if this training data is not similar enough to the target domain, a general model may be unsuitable. In this circumstance, domain adaptation is appropriate. We test a general model trained on OntoNotes (Weischedel et al., 2011), a roughly 1.5 million word corpus of annotated sentences from the Wall Street Journal, broadcast news, newswire text, and several other domains. To this annotated data, we add in annotations from the "least-probable n%" of the data, as above, and retrain the model on all the selected data. The results are summarized in Figure 2. Again we include comparison against randomly-selected data (the same random selection as before), and the "most-probable" data. The case for low-probability sentences as intelligent training examples is again quite pronounced, but it is worth making a few remarks. First, the baseline model trained on OntoNotes outperforms the best model from the previous step. This is reasonable, given the OntoNotes corpus is an order of magnitude larger. Adding in domain-specific annotation does increase this performance, but the total gain in F1 score from adding the new corpus amounts to only about 1%. Over 40% of the available gain is achieved using only the least-probable 10% of the new data. Exact figures can be found in Table 4. In these experiments, almost all selection paradigms show a monotonic increase in performance as data is added, so it may seem strange that the trend breaks in domain adaptation, as illustrated clearly in Figure 2. When we add leastprobable 70%, 80%, and 100% of the in-domain data, the performance drops.This irregularity may be dependent on the particular corpus, but seems to occur only when most Table 3: Scores for SRL models trained on various portions of the training data. LP refers to models where the least probable data was used, RND means random data was selected, MP means the most probable data was used. ALL means all data was used; this result is constant regardless of which data selection paradigm is followed.
of the data is annotated and added in. The goal of this work is to dramatically reduce the annotation load in a simple one-shot ranking, so even with this oddity the main result remains the same. A more thorough investigation is left for future work.
Results
The results in this study are promising. Although for oneshot data ranking, performance generally increases as we add more data, there is a strong desire to reduce the amount of annotation required. We demonstrate that low probability in a language model sense is worthwhile as a proxy for usefulness of data points as training examples. This suggests that the method of Dligach and Palmer (2011), that couples this language model ranking with active learning, can be applied to SRL. Because compound sentences are likely to be lowprobability, it is possible that some of the benefit from selecting low-probability senses is a direct result of the additional number of training clauses. However, if this were the only benefit to selecting the low-probability sentences, it is unlikely that there would be such an improvement Table 4: Scores for SRL models trained on OntoNotes data, plus various portions of the domain-specific training data. LP refers to models where the least probable data was used, RND means random data was selected, MP means the most probable data was used. ON means OntoNotes was used, and ALL means all domain-specific data was used; these results are constant regardless of which data selection paradigm is followed.
over randomly-selected data. We require three times the amount of annotated data to get the same performance from random selection; to get this result only from extra annotated clauses, we would have to expect three times as many clauses per low-probability sentence as there are in average sentences in the corpus. This may account for some of the difference, but there is still a clear advantage to using language model selection.
Future Directions
There has been work on active learning in the SRL task (Chen et al., 2011). In (Dligach and Palmer, 2011), active learning showed a considerable benefit from using language modeling to select the seed set of sentences for WSD.
Here, the performance increase from language model selection is so drastic, it seems that this result is likely to hold for the SRL task. Along this line, Pradhan et al. (2005) has successfully applied active sampling to the SRL task. While this differs from active learning in that the labels are Figure 1: Overall SRL performance as the amount of available training data increases. The data is added from least probable to most probable sentences. Figure 2: Overall SRL performance on domain adaptation. The base model is always included, and we include portions of the domain-specific training data. The data is added from least probable to most probable sentences.
already known, it demonstrates that, for automatic SRL, a small set of training data can generate a high quality model, especially when combined with the SVM classifier (where only the "support" samples affect the model produced by the learning algorithm). In addition to providing direction for active learning, the work in (Chen et al., 2011) suggests another compelling experiment, even if an active learning paradigm is not used. The authors use collapsed dependency trees to estimate the "representativeness" of particular training sentences, which balances the tendency of active learning systems to overfit to outlier data points. This same technique could be coupled with language model selection, to hopefully produce an even higher-quality selection of initial data. There does seem to be a real increase in performance when the lowprobability sentences are selected, but at least some of these sentences will be low-probability because they are not representative of the data as a whole. This could introduce a bias, that the current experiment does not provide an adequate test for. Further, it was noted that low-probability sentences are more likely to be long and complex constructions. Compound sentences provide multiple example clauses to train on, per sentence. At least some of the performance increase in this paper is likely to come from these extra examples. Also, these complex sentences take longer to annotate than shorter constructions. However, the low-probability sentences also contain a larger-than-average percentage of rare arguments and modifiers (ARGM-TMP, for example, which adds time information to a clause). Although it will require a new annotation effort, it is certainly worth investigating the benefit of language model selection in terms of performance per hour of completed annotation. Based on the results in this paper, it is quite reasonable to expect that language model selection is still a useful ranking scheme to select data for annotation.
Conclusions
The experiments in this paper demonstrate that preselecting data using sentence probabilities is promising for the SRL task, in addition to the WSD task. The model performs better on limited training data when this heuristic is employed. Although the results in this paper are only a pilot study, they are quite promising; a larger investigation with more data and more varied domains is justified. Also, further experiments could strengthen the results by demonstrating how much time can be saved with this approach, instead of looking only at how many sentences can be skipped. Intelligent data selection is not necessarily solved by this approach, and other selection criteria may also be useful to explore. In particular it is worth exploring this technique in conjunction with active learning, like in Dligach and Palmer (2011).
Table 1 :
1Example sentences with semantic role labels. Relations are in bold.
AcknowledgementsWe gratefully acknowledge the support of DARPA HR0011-11-C-0145 (via LDC) BOLT. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of DARPA.
Enhancing active learning for semantic role labeling via compressed dependency trees. C Bonial, J Bonn, K Conger, J Hwang, M ; Palmer, C Chen, A Palmer, C Sporleder, Proceedings of the 9th edition of the Language Resources and Evaluation Conference. the 9th edition of the Language Resources and Evaluation ConferenceReykjavik, IcelandProceedings of International Joint Conference on Natural Language ProcessingBonial, C., Bonn, J., Conger, K., Hwang, J., and Palmer, M. (2014). Propbank: Semantics of new predicate types. In Proceedings of the 9th edition of the Language Re- sources and Evaluation Conference, Reykjavik, Iceland. Chen, C., Palmer, A., and Sporleder, C. (2011). Enhancing active learning for semantic role labeling via compressed dependency trees. In Proceedings of International Joint Conference on Natural Language Processing.
Using language modeling to select useful annotation data. D Dligach, M Palmer, Proceedings of the Student Research Workshop and Doctoral Consortium Held in Conjunction with NAACL-HLT. the Student Research Workshop and Doctoral Consortium Held in Conjunction with NAACL-HLTBoulder, CODligach, D. and Palmer, M. (2009). Using language mod- eling to select useful annotation data. In Proceedings of the Student Research Workshop and Doctoral Consor- tium Held in Conjunction with NAACL-HLT, Boulder, CO.
Good seed makes a good crop: Accelerating active learning using language modeling. D Dligach, M Palmer, Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics. the 49th Annual Meeting of the Association for Computational LinguisticsPortland, ORDligach, D. and Palmer, M. (2011). Good seed makes a good crop: Accelerating active learning using language modeling. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, Portland, OR.
Liblinear: A library for large linear classification. R E Fan, K W Chang, C J Hsieh, X R Wang, C J Lin, Journal of Machine Learning Research. Fan, R. E., Chang, K. W., Hsieh, C. J., Wang, X. R., and Lin, C. J. (2008). Liblinear: A library for large linear classification. Journal of Machine Learning Research.
A sequential algorithm for training text classiers. D Lewis, W Gale, Proceedings of the ACM SIGIR Conference on Research and Development in Information Retrieval. the ACM SIGIR Conference on Research and Development in Information RetrievalACM/SpringerLewis, D. and Gale, W. (1994). A sequential algorithm for training text classiers. In Proceedings of the ACM SIGIR Conference on Research and Development in Informa- tion Retrieval. ACM/Springer.
The proposition bank: A corpus annotated with semantic roles. M Palmer, D Gildea, P Kingsbury, Computational Linguistics Journal. 31Palmer, M., Gildea, D., and Kingsbury, P. (2005). The proposition bank: A corpus annotated with semantic roles. Computational Linguistics Journal, 31.
Semantic role chunking combining complementary syntactic views. S Pradhan, K Hacioglu, W Ward, J H Martin, D Jurafsky, Proceedings of the Ninth Conference on Computational Natural Language Learning. the Ninth Conference on Computational Natural Language LearningPradhan, S., Hacioglu, K., Ward, W., Martin, J. H., and Jurafsky, D. (2005). Semantic role chunking combining complementary syntactic views. In Proceedings of the Ninth Conference on Computational Natural Language Learning.
OntoNotes: A Large Training Corpus for Enhanced Processing. A Stolcke, Denver, Co, R Weischedel, E Hovy, M Marcus, M Palmer, R Belvin, S Pradan, L Ramshaw, N Xue, Proceedings of the International Conference on Spoken Language Processing. the International Conference on Spoken Language ProcessingSpringer VerlagSrilm -an extensible language modeling toolkitStolcke, A. (2002). Srilm -an extensible language model- ing toolkit. In Proceedings of the International Confer- ence on Spoken Language Processing, Denver, CO. Weischedel, R., Hovy, E., Marcus, M., Palmer, M., Belvin, R., Pradan, S., Ramshaw, L., and Xue, N., (2011). OntoNotes: A Large Training Corpus for Enhanced Pro- cessing, pages 54-63. Springer Verlag.
Semantic mappingusing automatic word alignment and semantic role labeling. S Wu, M Palmer, Proceedings of ACL Workshop on Syntax and Structure in Statistical Translation (SSST-5). ACL Workshop on Syntax and Structure in Statistical Translation (SSST-5)Portland, ORWu, S. and Palmer, M. (2011). Semantic mappingusing automatic word alignment and semantic role labeling. In Proceedings of ACL Workshop on Syntax and Structure in Statistical Translation (SSST-5), Portland, OR. |
207,960,468 | With the evolution of network communication technology and advances in multimedia application, speech or data networks over an IP connection are vulnerable to threats. Therefore, the need to protect data attracts many researches on safe communications, especially speech secure communication. Additionally, with the large volume of unprotected speech data transmitted over the internet, Voice over Internet Protocol (VoIP) packets could be lost, and they cannot be recovered back, which would result in a degradation of speech quality. In this paper, we propose a secure speech communication approach based on chaotic cryptography combined with G.722.2 error recovery technique performed by interleaving. On the one hand, this approach uses the interleaving technique on inter-frames of G.722.2 speech in order to make a continuous packet loss becoming an isolated packets loss. On the other hand, speech will be encrypted using chaotic Lorenz system which achieves high encryption efficiency. To evaluate performance, the proposed design was evaluated through Enhanced Modified Bark Spectral Distortion (EMBSD) and Mean Opinion Score (MOS) with different packet loss rates to confirm the efficiency of our proposed scheme. | [] | With the evolution of network communication technology and advances in multimedia application, speech or data networks over an IP connection are vulnerable to threats. Therefore, the need to protect data attracts many researches on safe communications, especially speech secure communication. Additionally, with the large volume of unprotected speech data transmitted over the internet, Voice over Internet Protocol (VoIP) packets could be lost, and they cannot be recovered back, which would result in a degradation of speech quality. In this paper, we propose a secure speech communication approach based on chaotic cryptography combined with G.722.2 error recovery technique performed by interleaving. On the one hand, this approach uses the interleaving technique on inter-frames of G.722.2 speech in order to make a continuous packet loss becoming an isolated packets loss. On the other hand, speech will be encrypted using chaotic Lorenz system which achieves high encryption efficiency. To evaluate performance, the proposed design was evaluated through Enhanced Modified Bark Spectral Distortion (EMBSD) and Mean Opinion Score (MOS) with different packet loss rates to confirm the efficiency of our proposed scheme.
Introduction
Recently, with the development of network communication technology and signal processing techniques, it has become realistic to transmit speech, just like computer data, over the Internet (VoIP: Voice over Internet Protocol). However, Internet use has become very widespread, and the huge mass of data overloads the network (Mata-Díaz et al., 2014 - Labyd et al., 2014).
Networks must provide predictable, secure, measurable, and sometimes guaranteed services. Achieving the required Quality of Service (QoS) by managing the delay, delay variation (jitter), bandwidth, and packet loss parameters on a network has become the key to a successful end-to-end business solution. In real-time transmissions, IP networks are unpredictable and offer a best-effort transfer service with no QoS guarantees. Therefore, packets can be lost, causing interruptions in the conversation and a feeling of choppy speech that is very annoying for listeners. It is therefore fundamental to put in place a mechanism for concealing packet loss, such as the interleaving method or Forward Error Correction (FEC) (Nagano and Ito, 2013 - Shetty and Gibson, 2007).
In addition, speech data is vulnerable to being corrupted or stolen by hackers on the internet. For secure communication, it is necessary to protect the data using encryption methods (Alvarez and Li, 2006).
Recently, research on chaotic cryptography has increased rapidly in order to improve chaos-based cryptosystems. In 1963, Edward Lorenz founded chaos theory, followed by the discovery of the Rössler attractor in 1976; since then, several chaotic systems have been established (Jiang and Fu, 2008 - Kaur and Kumar, 2018). A chaotic system is a non-linear, deterministic system presenting good properties such as aperiodicity, pseudorandomness and sensitivity to changes in initial conditions, which makes it unpredictable. Because of these characteristics, chaos has been used in encryption systems (Zhang and Cao, 2011 - Moon et al., 2017).
In (Afrizal, 2018), the authors' study focuses on examining a few speech codecs usually used in connectionless communication, such as G.711, G.722, G.729, AMR-NB, and AMR-WB, for voice over LTE applications, and on the impact of random and burst packet loss on voice communication for these codecs, using Evalid and the NS-3 simulator. In (Li et al., 2015) the paper describes a method of digital encryption based on the Lorenz continuous chaotic system, combined with chaotic dynamics: a continuous sequence of numbers is generated by the Lorenz chaotic system, the continuous data is discretized through the Euler method, and, taking image encryption as an example, the digital encryption features of the Lorenz chaotic system are verified. In (Guo et al., 2002) the authors propose a VoIP technique combining speech data encryption and G.729 error recovery. This technique uses chaotic data interleaving on inter-frames of voice to make a situation of continuous packet loss become an isolated packet loss situation. Then, they propose a Periodical Parameter Re-initialization (PPR) recovery approach to reduce the signal quality degradation in the G.729 decoder due to the loss of state synchronization with the G.729 encoder. Besides the proposed VoIP technique, they also use the idea of chaotic data encryption on intra-frames of speech to scramble the data sequence within a speech frame.
In this paper, we propose a secure speech communication approach based on chaotic cryptography combined with G.722.2 error recovery technique performed by interleaving, and it is organized as follows. In Section 2, an overview of the AMR-WB G.722.2 is introduced. Section 3 gives a very brief description of the proposed technique, which has a direct relation to our contribution. Simulations and interpretation are presented in Section 4. Finally, the conclusion is provided in section 5.
Overview of the AMR-WB G.722.2
The Adaptive Multi-Rate Wideband (AMR-WB) speech codec is based on Adaptive Multi-Rate encoding, using a methodology similar to algebraic code excited linear prediction (ACELP). AMR-WB is codified as G.722.2, an ITU-T standard speech codec; it was improved by Nokia and VoiceAge and was first defined by 3GPP. AMR-WB offers enhanced speech quality due to a larger speech bandwidth of 50-7000 Hz compared to narrowband speech coders. G.722.2 samples audio data at a rate of 16 kHz and contains nine bit rates of 23.85, 23.05, 19.85, 18.25, 15.85, 14.25, 12.65, 8.85 and 6.6 kbps, denoted modes 8, 7, 6, 5, 4, 3, 2, 1 and 0, respectively. To reduce the average bit rate, this codec supports discontinuous transmission (DTX), using Voice Activity Detection (VAD) and Comfort Noise Generation (CNG) algorithms (ITU-T Standard G.722.2, 2003).
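For reference, the mode-to-bit-rate mapping quoted above can be written as a small lookup; this is an illustrative helper used in the sketches below, not part of the codec software.

```python
# AMR-WB (G.722.2) coding modes and their bit rates in kbps.
AMR_WB_MODES_KBPS = {0: 6.60, 1: 8.85, 2: 12.65, 3: 14.25, 4: 15.85,
                     5: 18.25, 6: 19.85, 7: 23.05, 8: 23.85}

def bitrate_kbps(mode: int) -> float:
    return AMR_WB_MODES_KBPS[mode]
```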
The coder works with a frame size of 20 ms and the algorithmic delay for the coder is 25 ms. The AMR-WB G.722.2 uses six parameters (VAD-flag, ISP, pitch delay, LTP-filtering, algebraic code, and gain) to represent the speech, and these are shown in Figure 1 for the 6.60 kbps bit rate, where B1, B2, ..., B133 represent the bit 0 (BIT-0: FF81) or the bit 1 (BIT-1: 007F) of the coder parameters, each of which is coded on 16 bits (WORD16).
The proposed technique
In this study, two techniques are combined, employing interleaving and encryption processes. The encoded bitstream is reordered using interleaving, then transmitted over a lossy IP channel after encryption, channel encoding and modulation. All these steps are reversed at the receiver, as depicted in Figure 2.
Interleaving process
The interleaving technique is very useful when the packets contain multiple frames and the end-to-end delay is not critical. Before transmission of the bitstream, the frames are re-arranged in such a way that initially adjacent ones are separated in the transmitted bitstream, and then put back in their original order at the receiver. As a result, the effects of packet erasures are scattered, and a situation of continuous packet loss becomes a situation of isolated packet losses (Okamoto et al., 2014).
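As an illustration, a minimal block-interleaver sketch over encoded frames is given below; the interleaving depth is an assumed parameter, and the real system applies the same idea to G.722.2 inter-frames. A burst of consecutive lost packets in the interleaved stream maps to isolated frame losses after de-interleaving.

```python
def interleave(frames, depth=4):
    """Reorder frames column-wise from a depth x width block (block interleaver)."""
    width = -(-len(frames) // depth)                      # ceil division
    padded = frames + [None] * (depth * width - len(frames))
    rows = [padded[i * width:(i + 1) * width] for i in range(depth)]
    return [rows[r][c] for c in range(width) for r in range(depth)]

def deinterleave(frames, depth=4):
    """Invert interleave(): restore the original frame order, dropping padding."""
    width = len(frames) // depth
    cols = [frames[c * depth:(c + 1) * depth] for c in range(width)]
    restored = [cols[c][r] for r in range(depth) for c in range(width)]
    return [f for f in restored if f is not None]
```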
Encryption process
Some important properties of chaos, such as ergodicity, high sensitivity to changes in control parameters and initial conditions, and unpredictable behavior, can be used in the generation of random numbers. So we use the Lorenz model, the first well-known chaotic dynamical system, governed by the differential equations (Lorenz, 1963):
dx/dt = σ(y − x)    (1-a)
dy/dt = ρx − y − xz    (1-b)
dz/dt = xy − βz    (1-c)
where x, y, z are state variables and σ, ρ, β are real constant parameters of the system. With σ = 10, β = 8/3, ρ = 28, the Lorenz system generates a chaotic behavior and its attractor is depicted in Figure 3. The speech encryption algorithm is done in two stages: confusion and diffusion.
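A short sketch of generating the chaotic key sequences by discretizing equations (1-a)-(1-c) with a simple Euler step is given below; the step size and initial conditions are illustrative.

```python
def lorenz_sequences(n, dt=0.001, x=0.1, y=0.2, z=0.3,
                     sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Euler discretization of the Lorenz system; returns n samples of x, y, z."""
    xs, ys, zs = [], [], []
    for _ in range(n):
        dx = sigma * (y - x)
        dy = rho * x - y - x * z
        dz = x * y - beta * z
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs.append(x); ys.append(y); zs.append(z)
    return xs, ys, zs
```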
Step 1: In the confusion stage, the parameters of the frame are permuted using the keys generated by Lorenz equation (1-a). The key values are sorted in decreasing order while keeping the position (index) of each key value; then, the positions of the speech data are changed according to the key indexes.
Step 2: In the diffusion stage, the permuted parameters of the frames are substituted using keys obtained from equation (1-b) of the Lorenz system. The obtained keys are calculated as follows:
key(i) = [y(i) − floor(y(i))] * 32767. The diffusion is then performed using an operation between the data and the key.
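A minimal sketch of the two stages on one frame of 16-bit coder parameters follows, assuming the diffusion operation is a bitwise XOR (the exact operation is not named above) and reusing lorenz_sequences from the previous sketch.

```python
import math

def encrypt_frame(params, x_keys, y_keys):
    """params: 16-bit coder parameters of one frame.
    x_keys, y_keys: chaotic samples from equations (1-a) and (1-b), one per parameter."""
    # Confusion: sort the x-keys in decreasing order and permute the data
    # according to the resulting index order.
    order = sorted(range(len(params)), key=lambda i: x_keys[i], reverse=True)
    permuted = [params[i] for i in order]
    # Diffusion: key(i) = (y(i) - floor(y(i))) * 32767, combined with the data (XOR assumed).
    keys = [int((y - math.floor(y)) * 32767) for y in y_keys]
    return [p ^ k for p, k in zip(permuted, keys)], order

def decrypt_frame(cipher, order, y_keys):
    keys = [int((y - math.floor(y)) * 32767) for y in y_keys]
    permuted = [c ^ k for c, k in zip(cipher, keys)]
    plain = [0] * len(permuted)
    for pos, i in enumerate(order):
        plain[i] = permuted[pos]   # undo the confusion permutation
    return plain
```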
Simulation and discussion
In this section, we study the performance in terms of security and recovery quality of lost packets. Several experiments are carried out to test the interleaving and encryption efficiency of the presented wideband speech cryptosystem. The quality of the encrypted interleaved speech and the reconstructed signals is assessed for the standard AMR-WB G.722.2. Thus, the speech file was encoded with the standard AMR-WB G.722.2 codec for the tests below.
Performance of AMR-WB
In our test, a speech file with 198 frames is used, which is represented in Figure 4. Recall that the codec offers nine modes, of which we opt, in our experiments, for mode 0 (6.6 kbps) and mode 7 (23.05 kbps). The EMBSD and MOS assessments of speech quality are given in Figure 6. The values given by the two metrics show that the speech encoded in mode 7 is better than the one encoded in mode 0, while the original speech (no coding) is the best. A small difference between the original and the encoded speech remains because we have a lossy codec, but generally the encoded speech in both modes is classified as good.
Interleaving tests
The encoded speech data is scrambled using the interleaving method. To simulate VoIP network losses, we use a two-state Gilbert model. Table 1 shows the loss rates. We use the interleaving method to recover the lost packets during network congestion or degradation. Figures 7 and 8 give the results obtained from tests with the EMBSD and MOS objective and subjective measurement tools, respectively. We can see that the proposed method in both modes performs better than the original for the two loss rates 5% and 10%, but not for higher rates, i.e., more than 10%.
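For illustration, a two-state Gilbert loss simulator is sketched below: in the Good state packets are delivered, in the Bad state they are dropped. The transition probabilities are assumptions chosen to give a loss rate in the range studied here, not the values used in our experiments.

```python
import random

def gilbert_losses(n_packets, p_gb=0.05, p_bg=0.5, seed=0):
    """Return booleans (True = packet lost) from a two-state Gilbert model.
    p_gb: probability of moving Good -> Bad; p_bg: probability of moving Bad -> Good."""
    rng = random.Random(seed)
    lost, bad = [], False
    for _ in range(n_packets):
        bad = rng.random() < (1 - p_bg) if bad else rng.random() < p_gb
        lost.append(bad)
    return lost

# Average loss rate is roughly p_gb / (p_gb + p_bg), about 9% for the values above.
print(sum(gilbert_losses(10000)) / 10000)
```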
Encryption tests
The speech file was encoded using AMR-WB G.722.2 CS-ACELP. The resulting bitstreams were encrypted using chaotic full encryption performed by both confusion & diffusion processes. Figure 10 depicts the signal inspection in both the time and frequency domains.
We can see from Figures 10-a and 10-b that the encrypted speech signals are similar to white noise, which indicates that no residual intelligibility is available to eavesdroppers on the communication channel. However, the reconstructed speech signals (Figures 10-c and 10-d) obtained using the right keys are the same as the original. To evaluate the efficiency of the encryption schemes, we have used the EMBSD and MOS tools. We can see that the EMBSD (Figure 11) values for the original speech coded in modes 0 and 7 are near zero, which indicates its good quality. In contrast, significantly greater values are obtained for the encrypted speech data, which indicates its worse quality.
Also, the MOS evaluation in Figure 12 confirms this, giving "Good" scores for the original speech and "unsatisfactory" scores for the encrypted one. We can also notice that the decrypted speech, obtained with the same keys as used for encryption, gives a signal quality identical to the original speech.
Combined tests
The speech file is encoded and then scrambled using the interleaving process, in order to turn a continuous multiple-packet loss situation into an isolated packet loss situation. Next, it is encrypted with the chaotic Lorenz system. Figure 13 shows the combination of the interleaving and encryption processes. We can see that, for the two loss rates, the speech data appears as white noise.
Note: The EMBSD values and MOS scores for the interleaved and encrypted file in mode 0 or mode 7 are the same as those for the encryption-only speech, which indicates the efficiency of the full encryption.
Conclusion
In this paper, we have presented our proposed method, which combines chaos encryption using the Lorenz system with error recovery based on interleaving techniques for the standard ITU-T AMR-WB G.722.2 codec. The purpose of interleaving is to mitigate the speech quality degradation caused by packet losses. In addition, the experimental results and analysis show that the cryptosystem is efficient in terms of security, making it suitable for transmission over public channels.
Figure 1: The bitstream of the coder parameters (coder output / decoder input) for the 20-ms frame in mode 0
Figure 2: Proposed scheme of combined speech encryption with error recovery based on interleaving
Figure 3: Phase portraits and chaotic attractors of the Lorenz model
Figure 4: The speech file used in the tests
Figure 5: (a) Decoded speech in mode 0 (b) its spectrogram. Figure 5 shows the speech decoded in mode 0; the original and the decoded speech appear identical in waveforms.
Figure 6: EMBSD and MOS scores
Figure 7: EMBSD values for interleaving
Figure 8: MOS scores for interleaving. This can be confirmed by analyzing the audiogram speech slices in Figure 9.
Figure 9: Portion of G.722.2 speech in mode 0: (a) original speech (b) original speech with packet loss (10%) (c) using interleaving with packet loss (10%)
Figure 10: Full encryption using mode 0 of WB-G.722.2: (a) decoded encrypted speech (b) its spectrogram (c) decoded decrypted speech (d) its spectrogram
Figure 11: EMBSD values for full encryption
Figure 12: MOS scores for full encryption
Figure 13: Interleaved and encrypted speech decoded in mode 0. Figure 14 shows the speech audiograms of the proposed scheme.
Figure 14: Portion of G.722.2 speech in mode 0: (a) original speech (b) using interleaving and encryption with packet loss (10%) (c) using interleaving and encryption with packet loss (10%)
Speech Coding Combining Chaos Encryption and Error Recovery for G.722.2 Codec
Messaouda Boumaraf and Fatiha Merazka
LISIC Laboratory, Telecommunications Department
USTHB University, Algiers, Algeria
boumaraf.messa@gmail.com, fmerazka@usthb.dz |
|
14,585,731 | To support context-based multimodal interpretation in conversational systems, we have developed a semantics-based representation to capture salient information from user inputs and the overall conversation. In particular, we present three unique characteristics: finegrained semantic models, flexible composition of feature structures, and consistent representation at multiple levels. This representation allows our system to use rich contexts to resolve ambiguities, infer unspecified information, and improve multimodal alignment. As a result, our system is able to enhance understanding of multimodal inputs including those abbreviated, imprecise, or complex ones. | [
851193,
2570492
] | IBM T. J. Watson Research Center
19 Skyline Drive, Hawthorne, NY 10532, USA
To support context-based multimodal interpretation in conversational systems, we have developed a semantics-based representation to capture salient information from user inputs and the overall conversation. In particular, we present three unique characteristics: finegrained semantic models, flexible composition of feature structures, and consistent representation at multiple levels. This representation allows our system to use rich contexts to resolve ambiguities, infer unspecified information, and improve multimodal alignment. As a result, our system is able to enhance understanding of multimodal inputs including those abbreviated, imprecise, or complex ones.
Introduction
Inspired by earlier works on multimodal interfaces (e.g., Bolt, 1980; Cohen et al., 1996; Wahlster, 1991; Zancanaro et al., 1997), we are currently building an intelligent infrastructure, called Responsive Information Architect (RIA), to aid users in their information-seeking process. Specifically, RIA engages users in a full-fledged multimodal conversation, where users can interact with RIA through multiple modalities (speech, text, and gesture), and RIA can act/react through automated multimedia generation (speech and graphics) (Zhou and Pan, 2001). Currently, RIA is embodied in a testbed, called Real Hunter TM, a real-estate application that helps users find residential properties.
As a part of this effort, we are building a semantics-based multimodal interpretation framework MIND (Multimodal Interpretation for Natural Dialog) to identify the meanings of user multimodal inputs. Traditional multimodal interpretation has focused on integrating multimodal inputs together, with limited consideration of the interaction context. In a conversation setting, user inputs can be abbreviated or imprecise. Combining multiple inputs together alone often cannot yield a full understanding. Therefore, MIND applies rich contexts (e.g., conversation context and domain context) to enhance multimodal interpretation. In support of this context-based approach, we have designed a semantics-based representation to capture salient information from user inputs and the overall conversation.
In this paper, we will first give a brief overview on multimodal interpretation in MIND. Then we will present our semantics-based representation and discuss its characteristics. Finally, we will describe the use of this representation in context-based multimodal interpretation and demonstrate that, with this representation, MIND is able to process a variety of user inputs including those ambiguous, abbreviated and complex ones.
Multimodal Interpretation
To interpret user multimodal inputs, MIND takes three major processes, as shown in Figure 1: unimodal understanding, multimodal understanding, and discourse understanding. During unimodal understanding, MIND applies modality specific recognition and understanding components (e.g., a speech recognizer and a language interpreter) to identify meanings from each unimodal input, and captures those meanings in a representation called modality unit. During multimodal understanding, MIND combines semantic meanings of unimodal inputs (i.e., modality units), and uses contexts (e.g., conversation context and domain context) to form an overall understanding of user multimodal inputs. Such an overall understanding is then captured in a representation called conversation unit. Furthermore, MIND also identifies how an input relates to the overall conversation discourse through discourse understanding. In particular, MIND uses a representation called conversation segment to group together inputs that contribute to the same goal or sub-goal (Grosz and Sidner, 1986). The result of discourse understanding is an evolving conversation history that reflects the overall progress of a conversation. Figure 2 shows a conversation fragment between a user and MIND. In the first user input U1, the deictic gesture (shown in Figure 3) is ambiguous. It is not clear which object the user is pointing at: two houses nearby or the town of Irvington 1 . The third user input U3 by itself is incomplete since the purpose of the input is not specified. Furthermore, in U4, a single deictic gesture overlaps (in terms of time) with both "this style" and "here" from the speech input, and it is hard to determine which one of those two references should be aligned and fused with the gesture. Finally, U5 is also complex since multiple objects ("these two houses") specified in the speech input need to be unified with a single deictic gesture.
This example shows that user multimodal inputs exhibit a wide range of varieties: they can be abbreviated, ambiguous, or complex. Fusing inputs together often does not yield a full understanding. To process these inputs, contexts are important.
Semantics-based Representation
To support context-based multimodal interpretation, both representation of user inputs and representation of contexts are crucial. Currently, MIND uses three types of contexts: domain context, conversation context, and visual context. The domain context provides domain knowledge. The conversation context reflects the progress of the overall conversation. The visual context gives the detailed semantic and syntactic structures of visual objects and their relations. In this paper, we focus on representing user inputs and the conversation context. In particular, we discuss two aspects of representation: semantic models that capture salient information and structures that represent those semantic models.
Semantic Models
When two people participate in a conversation, their understanding of each other's purposes forms strong constraints on how the conversation is going to proceed. Especially, in a conversation centered around information seeking, understanding each other's information needs is crucial. Information needs can be characterized by two main aspects: motivation for seeking the information of interest and the information sought itself. Thus, MIND uses an intention model to capture the first aspect and an attention model to capture the second. Furthermore, since users can use different ways to specify their information of interest, MIND also uses a constraint model to capture different types of constraints that are important for information seeking.
Intention and Attention
Intention describes the purpose of a message. In an information seeking environment, intention indicates the motivation or task related to the information of interest. An intention is modeled by three dimensions: Motivator indicating one of the three high level purposes: DataPresentation, DataAnalysis (e.g., comparison), and ExceptionHandling (e.g., clarification), Act specifying whether the input is a request or a reply, and Method indicating a specific task, e.g., Search (activating the relevant objects based on some criteria) or Lookup (evaluating/retrieving attributes of objects).
Attention relates to objects and relations that are salient at each point of a conversation. In an information seeking environment, it relates to the information sought. An attention model is characterized by six dimensions. Base indicates the semantic type of the information of interest (e.g., House, School, or City, which are defined in our domain ontology). Topic specifies the granularity of the information of interest (e.g., Instance or Collection). Focus identifies the scope of the topic, as to whether it is about a particular feature (i.e., SpecificAspect) or about all main features (i.e., MainAspect). Aspect provides specific features of the topic. Constraint describes constraints to be satisfied (described later). Content points to the actual data. The intention and attention models were derived from preliminary studies of user information needs in seeking residential properties. The details are described in (Chai et al., 2002).
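For illustration, the intention and attention models could be encoded as simple typed structures; the sketch below is our own rendering (with the U1 example values from the text), not MIND's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Optional, List

@dataclass
class Intention:
    motivator: Optional[str] = None   # DataPresentation | DataAnalysis | ExceptionHandling
    act: Optional[str] = None         # Request | Reply
    method: Optional[str] = None      # Search | Lookup

@dataclass
class Attention:
    base: Optional[str] = None        # semantic type, e.g. House, School, City
    topic: Optional[str] = None       # Instance | Collection
    focus: Optional[str] = None       # SpecificAspect | MainAspect
    aspect: Optional[str] = None      # e.g. Price
    constraint: Optional[dict] = None # reference or data constraint
    content: List[str] = field(default_factory=list)  # actual data, e.g. MLS ids

# U1 speech "How much is this?": only the instantiated features are filled in.
u1_intention = Intention(motivator="DataPresentation", act="Request", method="Lookup")
u1_attention = Attention(base="House", topic="Instance", focus="SpecificAspect",
                         aspect="Price",
                         constraint={"Category": "Anaphora",
                                     "Manner": "Demonstrative(THIS)", "Number": 1})
```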
For example, Figure 4(a-b) shows the Intention and Attention identified from the U1 speech and gesture inputs respectively. Intention in Figure 4(a) indicates the user is requesting RIA (Act: Request) to present her some data (Motivator: DataPresentation) about attributes of certain object(s) (Method: Lookup). The Attention indicates that the information of interest is about the price (Aspect: Price) of a certain object (Focus: Instance). The exact object is not known but is referred to by a demonstrative "this" (in Constraint). Intention in Figure 4(b) does not have any information since the high level purpose and the specific task cannot be identified from the gesture input. Furthermore, because of the ambiguity of the deictic gesture, three Attentions are identified. The first two Attentions are about house instances MLS0234765 and MLS0876542 (IDs from the Multiple Listing Service) and the third is about the town of Irvington.
Figure 2. A conversation fragment:
A collection of houses are shown on the map of Irvington.
U1: Speech: How much is this? Gesture: Point to the screen (not directly on any object)
R1: Speech: Which house are you interested in? Graphics: Highlight two house icons
U2: Speech: The green one.
R2: Speech: The green house costs 250,000 dollars.
U3: Speech: What about this one? Gesture: Point to a house icon on the screen
R3: Speech: This house costs 320,000 dollars. Graphics: Highlight the house icon and show a picture
U4: Speech: Show me houses with this style around here Gesture: Point to a position east of Irvington on the map
R4: Speech: This is a Victorian style house. I find seven Victorian houses in White Plains. Graphics: Show seven houses in White Plains
U5: Speech: Compare these two houses with the previous house. Gesture: Point to the corner of the screen where two house icons are displayed
R5: Speech: Here is the comparison chart. Graphics: Show a chart
Figure 3. An example of graphics output ("user points here")
Constraints
In an information seeking environment, based on the conversation context and the graphic display, users can refer to objects using different types of references, for example, through temporal or spatial relations, visual cues, or simply a deictic gesture. Furthermore, users can also search for objects using different constraints on data properties. Therefore, MIND models two major types of constraints: reference constraints and data constraints. Reference constraints characterize different types of references. Data constraints specify relations of data properties. A summary of our constraint model is shown in Figure 5. Both reference constraints and data constraints are characterized by six dimensions. Category sub-categorizes constraints (described later). Manner indicates the specific way such a constraint is expressed. Aspect indicates a feature (features) this constraint is concerned about. Relation specifies the relation to be satisfied between the object of interest and other objects or values. Anchor provides a particular value, object or a reference point this constraint relates to. Number specifies cardinal numbers that are associated with the constraint.
Reference Constraints
Reference constraints are further categorized into four categories: Anaphora, Temporal, Visual, and Spatial. An anaphora reference can be expressed through pronouns such as "it" or "them" (Pronoun), demonstratives such as "this" or "these" (Demonstrative), here or there (Here/There), or proper names such as "Lynhurst" (ProperNoun). An example is shown in Figure 4(a), where a demonstrative "this" (Manner: Demonstrative-This) is used in the utterance "this house" to refer to a single house object (Number: 1). Note that Manner also keeps track of the specific type of the term. The subtle difference between terms can provide additional cues for resolving references. For example, the different use of "this" and "that" may indicate the recency of the referent in the user mental model of the discourse, or the closeness of the referent to the user's visual focus.
Temporal references use temporal relations to refer to entities that occurred in the prior conversation. Manner is characterized by Relative and Absolute. Relative indicates a temporal relation with respect to a certain point in a conversation, and Absolute specifies a temporal relation regarding to the whole interaction. Relation indicates the temporal relations (e.g., Precede or Succeed) or ordinal relations (e.g., first). Anchor indicates a reference point. For example, as in Figure 6(a), a Relative temporal constraint is used since "the previous house" refers to the house that precedes the current focus (Anchor: Current) in the conversation history. On the other hand, in the input: "the first house you showed me," an Absolute temporal constraint is used since the user is interested in the first house shown to her at the beginning of the entire conversation.
Spatial references describe entities on the graphic display in terms of their spatial relations. Manner is again characterized by Absolute and Relative. Absolute indicates that entities are specified through orientations (e.g., left or right, captured by Relation) with respect to the whole display screen (Anchor: Display-Frame). In contrast, Relative specifies that entities are described through orientations with respect to a particular sub-frame (Anchor: FocusFrame, e.g., an area with highlighted objects) or another object. Visual references describe entities on the graphic output using visual properties (such as displaying colors or shapes) or visual techniques (such as highlight). A Manner of Comparative indicates a visual entity is compared with another value (captured by Anchor). Aspect indicates the visual entity used (such as Color and Shape, which are defined in our domain ontology). Relation specifies the relation to be satisfied between the visual entity and some value. For example, the constraint used in the input "the green house" is shown in Figure 6(b). It is worth mentioning that during reference resolution, the color Green will be further mapped to the internal color encoding used by graphics generation.
Data Constraints
Data constraints describe objects in terms of their actual data attributes (Category: Attributive). A Manner of Comparative indicates the constraint is about a comparative relation between (aspects of) the desired entities and other entities or values. Superlative indicates the constraint is about minimum or maximum requirement(s) for particular attribute(s). Fuzzy indicates a fuzzy description of the attributes (e.g., "cheap house"). For example, for the input "houses under 300,000 dollars" in Figure 7(a), Manner is Comparative since the constraint is about a "less than" relationship (Relation: Less-Than) between the price (Aspect: Price) of the desired object(s) and a particular value (Anchor: "300000 dollars"). For the input "3 largest houses" in Figure 7(b), Manner is Superlative since it is about the maximum (Relation: Max) requirement on the size of the houses (Aspect: Size).
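These two data constraints could be written out roughly as follows; this is our own plain-dictionary rendering for illustration, not MIND's internal format.

```python
# Attributive data constraint for "houses under 300,000 dollars" (Figure 7a).
under_300k = {
    "Category": "Attributive",
    "Manner": "Comparative",
    "Aspect": "Price",
    "Relation": "Less-Than",
    "Anchor": "300000 dollars",
}

# Superlative data constraint for "3 largest houses" (Figure 7b).
three_largest = {
    "Category": "Attributive",
    "Manner": "Superlative",
    "Aspect": "Size",
    "Relation": "Max",
    "Number": 3,
}
```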
The refined characterization of different constraints provides rich cues for MIND to identify objects of interest. In an information seeking environment, the objects sought can come from different sources. They could be entities that have been described earlier in the conversation, entities that are visible on the display, or entities that have never been mentioned or seen but exist in a database. Thus, fine-grained constraints allow MIND to determine where and how to find the information of interest. For example, temporal constraints help MIND navigate the conversation history by providing guidance on where to start, which direction to follow in the conversation history, and how many to look for.
Our fine-grained semantic models of intention, attention and constraints characterize user information needs and therefore enable the system to come up with an intelligent response. Furthermore, these models are domain independent and can be applied to any information seeking applications (for structured information).
Representing User Inputs
Given the semantic models of intention, attention and constraints, MIND represents those models using a combination of feature structures (Carpenter, 1992). This representation is inspired by the earlier works (Johnston et al., 1997;Johnston, 1998) and offers a flexibility to accommodate complex inputs. Specifically, MIND represents intention, attention and constraints identified from user inputs as a result of both unimodal understanding and multimodal understanding.
During unimodal understanding, MIND applies a decision tree based semantic parser on natural language inputs (Jelinek et al., 1994) to identify salient information. For the gesture input, MIND applies a simple geometry-based recognizer. As a result, information from each unimodal input is represented in a modality unit. We have seen several modality units (in Figure 4, Figure 6, and Figure 7), where intention, attention and constraints are represented in feature structures. Note that only features that can be instantiated by information from the user input are included in the feature structure. For example, since the exact object cannot be identified from U1 speech input, the Content feature is not included in its Attention structure (Figure 4a). In addition to intention, attention and constraints, a modality unit also keeps a time stamp that indicates when a particular input takes place. This time information is used for multimodal alignment which we do not discuss here.
Figure 7. Attributive data constraints
Depending on the complexity of user inputs, the representation can be composed by a flexible combination of different feature structures. Specifically, an attention structure may have a constraint structure as its feature, and on the other hand, a constraint structure may also include another attention structure.
For example, U4 in Figure 2 is a complex input, where the speech input "what about houses with this style around here" consists of multiple objects with different relations. The modality unit created for U4 speech input is shown in Figure 8(a). The Attention feature structure (A1) contains two attributive constraints indicating that the objects of interest are a collection of houses that satisfy two attributive constraints. The first constraint is about the style (Aspect: Style), and the second is about the location. Both of these constraints are related to other objects (Manner: Comparative), which are represented by Attention structures A2 and A3 through Anchor respectively. A2 indicates an unknown object that is referred by a Demonstrative reference constraint (this style), and A3 indicates a geographic location object referred by HERE. Since these two references are overlapped with a single deictic gesture, it is hard to decide which one should be unified with the gesture input. We will show in Section 4.3 that the fine-grained representation in Figure 8(a) allows MIND to use contexts to resolve these two references and improve alignment.
During multimodal understanding, MIND combines information from modality units together and generates a conversation unit that represents the overall meaning of user multimodal inputs. A conversation unit also has the same type of intention and attention feature structures, as well as the feature structure for data constraints. Since references are resolved during the multimodal understanding process, the reference constraints are no longer present in conversation units. For example, once two references in Figure 8(a) are resolved during multimodal understanding (details are described in Section 4.3), and MIND identifies "this style" is "Victorian" and "here" is "White Plains", it creates a conversation unit representing the overall meanings of this input in Figure 8(b).
Representing Conversation Context
MIND uses a conversation history to represent the conversation context based on the goals or sub-goals of user inputs and RIA outputs. For example, in the conversation fragment mentioned earlier (Figure 2), the first user input (U1) initiates a goal of looking up the price of a particular house. Due to the ambiguous gesture input, in the next turn, RIA (R1) initiates a sub-goal of disambiguating the house of interest. This sub-goal contributes to the goal initiated by U1. Once the user replies with the house of interest (U2), the sub-goal is fulfilled. Then RIA gives the price information (R2), and the goal initiated by U1 is accomplished. To reflect this progress, our conversation history is a hierarchical structure which consists of conversation segments and conversation units (in Figure 9). As mentioned earlier, a conversation unit records user (rectangle U1, U2) or RIA (rectangle R1, R2) overall meanings at a single turn in the conversation. These units can be grouped together to form a conversation segment (oval DS1, DS2) based on their goals and sub-goals. Furthermore, a conversation segment contains not only intention and attention, but also other information such as the conversation initiating participant (Initiator). In addition to conversation segments and conversation units, a conversation history also maintains different relations between segments and between units. Details can be found in (Chai et al., 2002).
Another main characteristic of our representation is the consistent representation of intention and attention across different levels. Just like modality units and conversation units, conversation segments also consist of the same type of intention and attention feature structures (as shown in Figure 9). This consistent representation not only supports unification-based multimodal fusion, but also enables context-based inference to enhance interpretation (described later).
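Because the same feature structures appear at every level, fusion can be sketched as recursive unification of feature structures; the following minimal implementation is our own illustration (not MIND's fusion algorithm) that fails on conflicting atomic values and merges everything else.

```python
def unify(a, b):
    """Unify two feature structures represented as nested dicts.

    Returns the merged structure, or None if two atomic values conflict."""
    if not isinstance(a, dict) or not isinstance(b, dict):
        return a if a == b else None
    merged = dict(a)
    for key, value in b.items():
        if key in merged:
            sub = unify(merged[key], value)
            if sub is None:
                return None          # conflicting values: unification fails
            merged[key] = sub
        else:
            merged[key] = value
    return merged

speech = {"Base": "House", "Topic": "Instance", "Aspect": "Price"}
gesture = {"Base": "House", "Topic": "Instance", "Content": "MLS0234765"}
print(unify(speech, gesture))            # combined attention structure
print(unify(speech, {"Base": "City"}))   # None: semantic types conflict
```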
We have described our semantics-based representation and presented three characteristics: fine-grained semantic models, flexible composition, and consistent representation. Next we will show how this representation is used effectively in the multimodal interpretation process.
The Use of Representation in Multimodal Interpretation
As mentioned earlier, multimodal interpretation in MIND consists of three processes: unimodal understanding, multimodal understanding and discourse understanding. Here we focus on multimodal understanding. The key difference between MIND and earlier works is the use of rich contexts to improve understanding. Specifically, multimodal understanding consists of two sub-processes: multimodal fusion and context-based inference. Multimodal fusion fuses intention and attention structures (from modality units) for unimodal inputs and forms a combined representation. Context-based inference uses rich contexts to improve interpretation by resolving ambiguities, deriving unspecified information, and improving alignment.
Resolving Ambiguities
User inputs could be ambiguous. For example, in U1, the deictic gesture is not directly on a particular object. Fusing intention and attention structures from each individual input presents some ambiguities. For example, in Figure 4(b), there are three Attention structures for the U1 gesture input. Each of them can be unified with the Attention structure from the U1 speech input (in Figure 4a). The result of fusion is shown in Figure 10(a). Since the reference constraint in the speech input (Number: 1 in Figure 4a) indicates that only one attention structure is allowed, MIND uses contexts to eliminate inconsistent structures. In this case, A3 in Figure 10(a) indicates the information of interest is about the price of the city Irvington. Based on the domain knowledge that the city object cannot have the price feature, A3 is filtered out. As a result, both A1 and A2 are potential interpretations. Therefore, the Content in those structures is combined using a disjunctive relation as in Figure 10(b). Based on this revised conversation unit, RIA is able to arrange the follow-up question to further disambiguate the house of interest (R1 in Figure 2). This example shows that modeling semantic information with fine-grained dimensions supports the use of domain knowledge in context-based inference, and can therefore resolve some ambiguities.
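The domain-knowledge filter described here can be sketched as discarding any candidate attention structure whose base type does not carry the requested aspect; the toy ontology below is an assumption introduced only for illustration.

```python
# Toy domain ontology: which aspects each semantic type supports (illustrative).
DOMAIN_ASPECTS = {
    "House": {"Price", "Size", "Style"},
    "City": {"Population", "Name"},
}

def filter_candidates(candidates, aspect):
    """Keep only attention structures whose Base can have the requested aspect."""
    return [c for c in candidates if aspect in DOMAIN_ASPECTS.get(c["Base"], set())]

candidates = [
    {"Base": "House", "Content": "MLS0234765"},
    {"Base": "House", "Content": "MLS0876542"},
    {"Base": "City", "Content": "Irvington"},
]
remaining = filter_candidates(candidates, "Price")
# Two houses remain; their Content values are combined disjunctively.
print(" | ".join(c["Content"] for c in remaining))
```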
Deriving Unspecified Information
In a conversation setting, user inputs are often abbreviated. Users tend to only provide new information when it is their turn to interact. Sometimes, fusing individual modalities together still cannot provide the overall meanings of those inputs. For example, after multimodal fusion, the conversation unit for U3 ("What about this one") does not give enough information on what the user exactly wants. The motivation and task of this input are not known, as in Figure 11(a). Only based on the conversation context is MIND able to identify the overall meaning of this input. In this case, based on the most recent conversation segment (DS1) in Figure 9 (also as in Figure 11b), MIND is able to derive Motivator and Method features from DS1 to update the conversation unit for U3 (Figure 11c). As a result, this revised conversation unit provides the overall meaning that the user is interested in finding out the price information about another house, MLS7689432. Note that it is important to maintain a hierarchical conversation history based on goals and sub-goals. Without such a hierarchical structure, MIND would not be able to infer the motivation of U3. Furthermore, because of the consistent representation of intention and attention at both the discourse level (in conversation segments) and the input level (in conversation units), MIND is able to directly use conversation context to infer unspecified information and enhance interpretation.
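Deriving the unspecified Motivator and Method from the most recent conversation segment can be sketched as filling only the missing intention fields; a minimal illustration, not the actual MIND inference procedure.

```python
def inherit_intention(unit_intention, segment_intention):
    """Fill unset intention fields of a conversation unit from its segment."""
    derived = dict(unit_intention)
    for key, value in segment_intention.items():
        derived.setdefault(key, value)
    return derived

u3 = {"Act": "Request"}                                      # from "What about this one?"
ds1 = {"Motivator": "DataPresentation", "Method": "Lookup"}  # segment DS1
print(inherit_intention(u3, ds1))
# {'Act': 'Request', 'Motivator': 'DataPresentation', 'Method': 'Lookup'}
```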
Improving Alignment
In a multimodal environment, users could use different ways to coordinate their speech and gesture inputs. In some cases, one reference/object mentioned in the speech input coordinates with one deictic gesture (U1, U3). In other cases, several references/objects in the speech input are coordinated with one deictic gesture (U4, U5). In the latter cases, using time stamps alone often cannot accurately align and fuse the respective attention structures from each modality. Therefore, MIND uses contexts to improve alignment based on our semantics-based representation. For example, from the speech input in U4 ("show me houses with this style around here"), three Attention structures are generated as shown in Figure 8(a). From the gesture input, only one Attention structure is generated, which corresponds to the city of White Plains. Since the gesture input overlaps with both "this style" (corresponding to A2) and "here" (corresponding to A3), there is no obvious temporal relation indicating which of these two references should be unified with the deictic gesture. In fact, both A2 and A3 are potential candidates. Based on the domain context that a city cannot have a feature Style, MIND determines that the deictic gesture is actually resolving the reference of "here". To resolve the reference of "this style", MIND uses the visual context, which indicates a house is highlighted on the screen. A recent study (Kehler, 2000) shows that objects in the visual focus are often referred to by pronouns, rather than by full noun phrases or deictic gestures. Based on this study, MIND is able to infer that most likely "this style" refers to the style of the highlighted house (MLS7689432). Supposing the style is "Victorian", MIND is then able to figure out that the overall meaning of U4 is looking for houses with a Victorian style and located in White Plains (as shown in Figure 8b). Furthermore, for U5 ("Compare these two houses with the previous house"), there are two Attention structures (A1 and A2) created for the speech input as in Figure 12(a). A1 corresponds to "these two houses", where the Number feature in the reference constraint is set to 2. Although there is only one deictic gesture, which points to two potential houses (Figure 12b), MIND is able to figure out that this deictic gesture is actually referring to a group of two houses rather than an ambiguous single house. Although the gesture input in U5 is of the same kind as that in U1, because of the fine-grained information captured from the speech input (i.e., the Number feature), MIND processes them differently. For the second reference of "previous house" (A2 in Figure 12a), based on the information captured in the temporal constraint, MIND searches the conversation history and finds the most recent house explored (MLS7689432). Therefore, MIND is able to reach an overall understanding of U5 that the user is interested in comparing three houses (as in Figure 12c).
Conclusion
To facilitate multimodal interpretation in conversational systems, we have developed a semantics-based representation to capture salient information from user inputs and the overall conversation. In this paper, we have presented three unique characteristics of our representation. First, our representation is based on fine-grained semantic models of intention, attention and constraints that are important in information seeking conversations. Second, our representation is composed of a flexible combination of feature structures and thus supports complex user inputs. Third, our representation of intention and attention is consistent at different levels and therefore facilitates context-based interpretation. This semantics-based representation allows MIND to use contexts to resolve ambiguities, derive unspecified information and improve alignment. As a result, MIND is able to process a large variety of user inputs including those incomplete, ambiguous or complex ones.
Acknowledgement
The author would like to thank Shimei Pan and Michelle Zhou for their contributions on semantic models.
Figure 1. MIND
1 The generated display has multiple layers, where the house icons are on top of the Irvington town map. Thus this deictic gesture could either refer to the town of Irvington or houses.
Figure 4. Intention and Attention identified from U1: (a) U1 speech: "How much is this"; (b) U1 gesture
Figure 5. Constraint model
Figure 6. Temporal and visual reference constraints: (a) "the previous house" (b) "the green house"
Figure 9. A fragment of a conversation history
Figure 10. Resolving ambiguity for U1: (a) conversation unit for U1 as a result of multimodal fusion; (b) revised conversation unit for U1 as a result of context-based inference
Figure 11. Deriving unspecified information for U3: (a) conversation unit for U3 as a result of multimodal fusion; (b) conversation segment DS1 in the conversation history; (c) revised conversation unit for U3 as a result of context-based inference
Bolt, R. (1980) Voice and gesture at the graphics interface. Computer Graphics, pages 262-270.
Carpenter, R. (1992) The logic of typed feature structures. Cambridge University Press.
Chai, J.; Pan, S.; and Zhou, M. X. (2002) MIND: A semantics-based multimodal interpretation framework for conversational systems. To appear in Proceedings of International CLASS Workshop on Natural, Intelligent and Effective Interaction in Multimodal Dialog Systems.
Cohen, P.; Johnston, M.; McGee, D.; Oviatt, S.; Pittman, J.; Smith, I.; Chen, L.; and Clow, J. (1996) Quickset: Multimodal interaction for distributed applications. Proc. ACM MM'96, pages 31-40.
Grosz, B. J. and Sidner, C. (1986) Attention, intentions, and the structure of discourse. Computational Linguistics, 12(3):175-204.
Jelinek, F.; Lafferty, J.; Magerman, D. M.; Mercer, R.; and Roukos, S. (1994) Decision tree parsing using a hidden derivation model. Proc. DARPA Speech and Natural Language Workshop.
Johnston, M.; Cohen, P. R.; McGee, D.; Oviatt, S. L.; Pittman, J. A.; and Smith, I. (1997) Unification based multimodal integration. Proc. 35th ACL, pages 281-288.
Johnston, M. (1998) Unification-based multimodal parsing. Proc. COLING-ACL'98.
Kehler, A. (2000) Cognitive status and form of reference in multimodal human-computer interaction. Proc. AAAI'01, pages 685-689.
Wahlster, W. (1998) User and discourse models for multimodal communication. In M. Maybury and W. Wahlster, editors, Intelligent User Interfaces, pages 359-370.
Zancanaro, M.; Stock, O.; and Strapparava, C. (1997) Multimodal interaction for information access: Exploiting cohesion. Computational Intelligence, 13(4):439-464.
Zhou, M. X. and Pan, S. (2001) Automated authoring of coherent multimedia discourse for conversation systems. Proc. ACM MM'01, pages 555-559.
Figure 12. Improving alignment for U5 |
|
259,376,635 | Towards Extracting and Understanding the Implicit Rubrics of Transformer Based Automated Essay Scoring Models | By aligning the functional components derived from the activations of transformer models trained for AES with external knowledge such as human-understandable feature groups, the proposed method improves the interpretability of a Longformer Automated Essay Scoring (AES) system and provides tools for performing such analyses on further neural AES systems. The analysis focuses on models trained to score essays based on ORGANIZATION, MAIN IDEA, SUPPORT, and LANGUAGE. The findings provide insights into the models' decisionmaking processes, biases, and limitations, contributing to the development of more transparent and reliable AES systems. | [
227231681,
196211427,
233181831,
250390536,
198974889,
10986188,
34975990,
51874490,
51878395,
237513725,
24461982
] | Towards Extracting and Understanding the Implicit Rubrics of Transformer Based Automated Essay Scoring Models
July 13, 2023
James Fiacco jfiacco@cs.cmu.edu
Language Technologies Institute
Language Technologies Institute Carnegie Mellon University
Carnegie Mellon University
David Adamson Turnitin
Language Technologies Institute
Language Technologies Institute Carnegie Mellon University
Carnegie Mellon University
Carolyn P Rosé cprose@cs.cmu.edu
Language Technologies Institute
Language Technologies Institute Carnegie Mellon University
Carnegie Mellon University
Towards Extracting and Understanding the Implicit Rubrics of Transformer Based Automated Essay Scoring Models
Proceedings of the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023)
the 18th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2023)July 13, 2023
By aligning the functional components derived from the activations of transformer models trained for AES with external knowledge such as human-understandable feature groups, the proposed method improves the interpretability of a Longformer Automated Essay Scoring (AES) system and provides tools for performing such analyses on further neural AES systems. The analysis focuses on models trained to score essays based on ORGANIZATION, MAIN IDEA, SUPPORT, and LANGUAGE. The findings provide insights into the models' decisionmaking processes, biases, and limitations, contributing to the development of more transparent and reliable AES systems.
Introduction
Since its inception over 50 years ago (Page, 1966), Automated Essay Scoring (AES) has been a valuable approach for evaluating large quantities of student essays. Recent developments in the field have sought to harness advanced natural language processing techniques to score essays on par with human raters, achieving significant progress toward that goal (Ramesh and Sanampudi, 2022;Huawei and Aryadoust, 2023;Mizumoto and Eguchi, 2023). The inability to understand the learned representations in deep learning based AES models introduces risk and validity concerns to their widespread use in educational settings (Ding et al., 2020;Kumar et al., 2020Kumar et al., , 2023. In response to this concern, we propose a functional component-based approach to scrutinize the activations of transformer models trained for AES.
The primary goal of this study is to provide a method and tool that can provide a coherent and interpretable understanding of the functions per-formed by these neural models, comparing their overlaps and differences, and aligning the learned functions with human-understandable groups of features 1 . Much in the same way that human evaluators use rubrics to guide their scoring of essays, neural models learn a set of features and connections that, when combined and applied to an essay, repeatably determine the score that they will assign.Through the comparison and contrast of these components across models, we investigate how the models prioritize different aspects of writing and make stride towards unveiling that their learned rubrics are, alongside any underlying biases or limitations that they entail. Ultimately, this in-depth analysis will enhance our understanding of the neural models' decision-making processes, thereby contributing to the development of more transparent and reliable automated essay scoring systems.
Our proposed methodology involves extending the emerging domain of neural network interpretation by using abstract functional components, enabling a robust comparison between probed functional components of a network and independent feature groups. This approach specifically builds upon recent work on neural probes and derived methods, aligning a neural network's activations with external knowledge such as task metadata and implicit features (e.g., parts-of-speech, capitalization, etc.) (Conneau et al., 2018;Belinkov, 2022). We focus our interpretation in the domain of AES where each model in our investigation is trained to score essays based on distinct evaluation traits, namely ORGANIZATION, MAIN IDEA, SUPPORT, and LANGUAGE. To probe these models, the features are drawn from several sources that correspond to concepts of both high and low validity for essay scoring: statistical features of an essay (e.g. number of sentences, number of paragraphs, etc.) (Woods et al., 2017), tree features generated from Rhetorical Structure Theory (RST) (Mann and Thompson, 1987) parses of the essays (Jiang et al., 2019;Fiacco et al., 2022), essay prompt and genre (West-Smith et al., 2018), and a combination of algorithmically derived (Derczynski et al., 2015) and our own human defined style-based word lists. These features provide a lens that while unable to capture all of the capabilities of the models, provide insight into some of the key differences between them. In the following sections, we provide a detailed description of the methodology used for this analysis, discuss the assumptions underpinning the method, and present potential explanations for correlated function/feature pairs through a series of experiments that validate our method's ability to reflect the internal rubric of each of the neural models.
Related Work
From the interpretability angle, the most closely related work to this is that of neural model probes (Shi et al., 2016;Adi et al., 2016;Conneau et al., 2018;Zhu et al., 2018;Kuncoro et al., 2018;Khandelwal et al., 2018) which have frequently being used to test whether a model has learned a set of properties (Ryskina and Knight, 2021;Belinkov, 2022). The primary gap we are working to fill in from this body of literature is that current approaches, with few exceptions (Fiacco et al., 2019;Cao et al., 2021), focus on understanding the roles of individual neurons in the greater neural network. We contend that studying the interpretability of a neural network at the individual neuron level can too easily obscure the broader picture. Our interest lies in further progress incorporating a more abstract perspective on what is learned by neural networks, complementing the work that has been done at the neuron level.
Compared to alternative paradigms for interpretability in machine learning models, such as LIME (Ribeiro et al., 2016) or SHAP (Lundberg and Lee, 2017), which evaluate the contribution of a given feature to the prediction of a model, the functional component based methods allow for a more granular identification of important parts of a model, independent from known features for a task. This can enable model analysts to quickly identify unexplained components and begin to propose alternative pallets of features. Furthermore, the functional components can represent intermediate steps within the neural network which would be unobservable with these alternative methods.
From the educational technologies and Automated Essay Scoring angle, our work primarily applies to the body of deep learning-based AES models such as recurrent neural network models (Jin et al., 2018;Nadeem et al., 2019), convolutional neural network models (Taghipour and Ng, 2016), and transformer models (Sethi and Singh, 2022). While our method could be applied to any type of neural model, we focus on transformers as they represent the state-of-the-art. By integrating the interpretability of neural models with the understanding of the functional components they learn, we hope to bridge the gap between human-understandable features and neural network-based essay scoring. The insights gained from our methodology can guide the development of more effective and efficient AES systems, tailored to the specific needs of educators and students. Furthermore, the lessons learned from this research may extend beyond the AES domain, providing valuable insights for the broader field of natural language processing and machine learning interpretability.
Methods
In this section we present our interpretation approach (Figure 1), defining the key concepts of functional components, functional group, feature, and feature group. Because the approach notably abstracts away from common terms in the neural network literature, throughout this section we draw an analogy to how one can define and describe the common features between mammals by comparing 233 their common and unique characteristics.
Functional Components and Groups
Functional components refer to the learned functions of a neural network, much like a particular component of a dog may be a "dog leg". In a neural AES system, these would be a group of neurons that have correlated activations when varying the input essays. The approach to extracting functional components ("neural pathways" as described by Fiacco et al. (2019)) from a neural network consists of finding the sets of coordinated neuron activations, summarized by the following steps:
1. Save the activations of neurons for each data instance in the validation dataset into an activation matrix, A of size M × N , where M is the number of data instances in the validation set and N is the number of neurons being used for the analysis.
2. Perform a dimensionality reduction, such as Principal Component Analysis (PCA) (Hotelling, 1933), on A to get component activation matrix, T model of size M × P , where P is the number of principal components for a given model.
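To make these two steps concrete, the following minimal sketch (not the authors' released code) builds the activation matrix and reduces it. The function name, the use of scikit-learn, and the handling of the variance threshold are assumptions made for illustration.

```python
# Sketch of functional component extraction, assuming per-essay activation
# vectors are already available as a list of 1-D numpy arrays.
import numpy as np
from sklearn.decomposition import PCA

def extract_component_activations(activations, variance_kept=0.95):
    """Stack activations into the M x N matrix A and reduce it to M x P."""
    A = np.vstack(activations)                       # M instances x N neurons
    pca = PCA(n_components=variance_kept, svd_solver="full")
    T_model = pca.fit_transform(A)                   # M x P component activations
    return T_model, pca
```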
Functional groups are collections of similar functional components. Continuing the analogy, they would be compared to the more general concept of a "leg". We compute functional groups by concatenating the dimensionality-reduced matrices, T model , of the two models that are to be compared and performing an additional dimensionality reduction over that matrix to get a matrix of group activations, T. The functional components that are highly loaded onto each functional group are considered members of that group. An important departure from Fiacco et al. (2019), stemming from the limitation that PCA does not guarantee independence between components, is that we use Independent Component Analysis (ICA) (Comon, 1994) instead. ICA is a dimensionality reduction technique that maximizes the independence between components, which improves the validity of the technique's resulting alignments.
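A rough sketch of how functional groups could be computed from two models' component activations is shown below. The use of scikit-learn's FastICA, the loading threshold of 0.3 for group membership, and all names are illustrative assumptions rather than the paper's exact procedure.

```python
import numpy as np
from sklearn.decomposition import FastICA

def extract_functional_groups(T_model_a, T_model_b, n_groups):
    """Concatenate two models' component activations (rows aligned by essay)
    and find independent functional groups with ICA."""
    T_concat = np.hstack([T_model_a, T_model_b])      # M x (P_a + P_b)
    ica = FastICA(n_components=n_groups, random_state=0, max_iter=1000)
    T_groups = ica.fit_transform(T_concat)            # M x n_groups group activations
    loadings = ica.mixing_                             # (P_a + P_b) x n_groups
    # Components highly loaded on a group are treated as its members
    # (the 0.3 cutoff is an arbitrary illustrative choice).
    members = {g: np.where(np.abs(loadings[:, g]) > 0.3)[0]
               for g in range(n_groups)}
    return T_groups, members
```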
To determine if a functional group is influential in the performance of the model (designating it an important functional group), we can compute the Pearson's correlation coefficient between each column of the group activation matrix and the predictions of the model, the errors of the model, and the differences between the compared models.
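A simple way to flag important functional groups under this definition might look like the following sketch; the target set (predictions, errors, pairwise differences) mirrors the description above, while the array names and the default threshold are assumptions.

```python
import numpy as np
from scipy.stats import pearsonr

def important_groups(T_groups, preds_a, preds_b, gold, threshold=0.2):
    """Flag functional groups whose activation correlates with model predictions,
    model errors, or the difference between the two compared models.
    All inputs are assumed to be numpy arrays aligned by essay."""
    targets = {
        "pred_a": preds_a,
        "pred_b": preds_b,
        "error_a": preds_a - gold,
        "error_b": preds_b - gold,
        "pair_diff": preds_a - preds_b,
    }
    important = {}
    for g in range(T_groups.shape[1]):
        for name, target in targets.items():
            r, p = pearsonr(T_groups[:, g], target)
            if abs(r) >= threshold:
                important.setdefault(g, []).append((name, r, p))
    return important
```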
Independent Feature Groups
Features are human understandable attributes that can be extracted from an analysis dataset. In the analogy they would represent potential descriptors of a component of a mammal, e.g. "hairy". In an AES context, these features may manifest as "no capitalization after a period". Ideally, it would be possible to create a direct mapping from each of the functional components to each of the features to which the functional component is related. However, this is non-trivial during a post-hoc analysis because, without interventions, there are limitations on what information is obtainable. Specifically, because features are not necessarily independent from each other, their correlations cannot be separated from each other, yielding imprecise interpretations. It is thus necessary to use only independent features as the unit of analysis when it comes to alignment with functional components. Unfortunately, in practice, this is a prohibitive restriction, and most features that would be interesting are going to have correlations.
Fortunately, much in the same way that we can use ICA to extract independent functional components from a neural network's activations, we can use it to construct independent feature groups that can reasonably be aligned with the functional groups of the neural networks. In the analogy, these independent feature groups can therefore be thought of as collections of descriptive terms that identify a characteristic of the mammal, such as "an appendage that comes in pairs and can be walked on", which would align with the "leg" functional group. In AES, an example feature group may be "uses punctuation improperly". It would be expected that this feature group would align well with a functional group in a neural AES system that corresponds with a negative essay score. Furthermore, feature groups for AES can be thought of as being roughly analogous to conditions that would be on an essay scoring rubric (as well as potentially other features that may be intuitive or obvious to human scorers but contribute to accurate scoring).
The specific process used to define these groups is to perform a dimensionality reduction on each set of feature types that may have significant correlations and to collect the results into a feature matrix. We do this process for each feature type rather than over all features at once because spurious correlations between some unrelated features may confound the feature groups, making them far more difficult to interpret.
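One plausible implementation of this per-feature-type reduction is sketched below, assuming each feature type is available as its own essay-by-feature matrix; the 99% variance setting anticipates the analysis settings reported later, and everything else is illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

def independent_feature_groups(feature_blocks, variance_kept=0.99):
    """feature_blocks: dict mapping a feature type (e.g. 'rst', 'word_lists')
    to its M x k feature matrix. Each type is reduced separately, then the
    resulting independent feature groups are concatenated."""
    reduced = []
    for name, X in feature_blocks.items():
        n = PCA(n_components=variance_kept, svd_solver="full").fit(X).n_components_
        G = FastICA(n_components=n, random_state=0, max_iter=1000).fit_transform(X)
        reduced.append(G)
    return np.hstack(reduced)   # M x (total independent feature groups)
```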
Alignment
Using ICA as the dimensionality reduction, the independent functional groups of the neural model can reasonably be aligned with the independent feature groups using the following formal procedure: given a neural network N with activation matrix A (as above), an independent component analysis is performed, yielding a set of functional components F. For each f_i, f_k ∈ F, f_i ⊥⊥ f_k | X, Y, where X is the set of inputs to the neural network and Y is the set of predictions from the neural network. With a sufficient number of components such that F contains all independent functional components in A, if there exists a common latent variable in both N and the set of independent feature groups G, with components g_j ∈ G, then there will be some f_i ∝∼ g_j.
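In practice, the alignment reduces to correlating every functional group with every independent feature group and keeping pairs whose correlation clears a threshold. The sketch below assumes both group activation matrices are row-aligned by essay; the vectorised Pearson computation and the names are illustrative.

```python
import numpy as np

def align_groups(T_functional, T_features, threshold=0.2):
    """Correlate every functional group with every independent feature group;
    pairs with |r| >= threshold are considered aligned."""
    F = (T_functional - T_functional.mean(0)) / T_functional.std(0)
    G = (T_features - T_features.mean(0)) / T_features.std(0)
    R = F.T @ G / F.shape[0]                  # matrix of Pearson correlations
    pairs = np.argwhere(np.abs(R) >= threshold)
    return [(int(i), int(j), float(R[i, j])) for i, j in pairs]
```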
Experiments
In this section, we delve into the specific methodology used to analyze the activations of the four transformer models for AES, as well as the steps taken to prepare the data and features for this analysis.
Datasets
Although scoring rubrics are specific to the genre and grade level of a writing task, there are commonalities between each rubric that allow their traits to be reasonably combined for modeling. All our rubrics, for example, include LANGUAGE (and style) and ORGANIZATION traits, though their expectations vary by genre and grade level. The generic MAIN IDEA trait corresponds to "Claim" and "Clarity and Focus" traits, and SUPPORT corresponds to "Support and Development" as well as "Analysis and Evidence." Rubrics and prompts were developed for validity, and essays were rigorously hand-scored by independent raters in the same manner as described in West-Smith et al. (2018).
For each generic trait, the training set was sampled down from over 50,000 available essays, responding to 95 writing prompts. Essays from 77 prompts were selected for the training set, and another 18 were held out for evaluation. Within each split, essays were sampled to minimize imbalance between essay score, genre, and grade level. In the unsampled data, longer essays tend to be strongly correlated with essay score, risking overfitting to this surface feature. Similarly, among the subset of data where school district data was available, districts with predominantly Black enrollment were under-represented among essays with a score of "4" across all traits. To counteract these potential biases, the available data was binned by length and district demographic information for each score, genre, and grade level, and essays were under-sampled from the largest bins. In addition to these balanced essays, about 800 "off topic" essays representing nonsense language or non-academic writing were included in the dataset, with a score of zero.
Models
The Longformer is a transformer-based neural network architecture that has gained prominence in various NLP tasks (Beltagy et al., 2020). In the context of AES, each generic trait's model is a Longformer with a single-output regression head, fine-tuned on the trait's balanced dataset. For the remainder of this paper, the model fine-tuned on a given trait will be referred to as "the TRAIT model" (e.g. the ORGANIZATION model) for simplicity.
Although ordinal scores from 0 to 4 were used for sampling and evaluation, the training data labels were continuous, averaged from rater scores. Essays were prefixed with text representing their genre (e.g., "Historical Analysis") and prompt's grade range (e.g., "grades 10-12") before tokenization, but no other context for the writing task (e.g., the prompt's title, instructions, or source material) was included. In addition to Longformer's sliding attention window of 512 tokens, the first and last 32 tokens received global attention.
Scores were rounded back to integers between 0 and 4 before evaluation. On the holdout prompts, overall Quadratic Weighted Kappa (QWK) ranged from 0.784 for MAIN IDEA to 0.839 for LANGUAGE, while correlation with word count remained acceptably low: 0.441 for LANGUAGE up to 0.550 for SUPPORT.
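For reference, the rounding and QWK evaluation described here can be reproduced with scikit-learn; the function below is a hedged sketch, not the authors' evaluation script.

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

def evaluate_qwk(continuous_preds, gold_scores):
    """Round continuous predictions back to the 0-4 integer scale, then score
    them against gold integer scores with Quadratic Weighted Kappa."""
    rounded = np.clip(np.rint(continuous_preds), 0, 4).astype(int)
    return cohen_kappa_score(gold_scores, rounded, weights="quadratic")
```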
The activations of the Longformer model were saved for each instance in the analysis set at the "classify" token to create a matrix of activations for the functional component extraction.
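A minimal sketch of this activation capture is given below. It assumes the public allenai/longformer-base-4096 checkpoint from HuggingFace as a stand-in for the fine-tuned trait models (which are not released here), and it treats the first token as the "classify" token; both are assumptions made for illustration.

```python
import torch
from transformers import LongformerTokenizer, LongformerForSequenceClassification

tok = LongformerTokenizer.from_pretrained("allenai/longformer-base-4096")
model = LongformerForSequenceClassification.from_pretrained(
    "allenai/longformer-base-4096", num_labels=1)   # single-output regression head
model.eval()

def classify_token_activation(essay_text, prefix):
    """Return the final-layer hidden state at the classification token for one essay,
    prefixed with its genre/grade text as described above."""
    enc = tok(prefix + " " + essay_text, return_tensors="pt",
              truncation=True, max_length=4096)
    # Give global attention to the first token (the paper also uses the last 32 tokens).
    enc["global_attention_mask"] = torch.zeros_like(enc["input_ids"])
    enc["global_attention_mask"][:, 0] = 1
    with torch.no_grad():
        out = model(**enc, output_hidden_states=True)
    return out.hidden_states[-1][0, 0, :].numpy()    # activation vector at token 0
```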
Features
The features employed in this analysis encompass statistical properties of the essays, tree features generated from Rhetorical Structure Theory (RST) parse trees of the essays, essay prompt and genre, a combination of algorithmically derived and human-defined style-based word lists, and certain school-level demographic features. A description of each feature type is provided below:
Statistical Features: While statistical features such as essay word count are often good indicators of essay score, they are not intrinsically valuable to the different traits that our models are scoring. We thus want to see lower alignment with these features, indicating that the model is not overly relying on rudimentary shortcuts when scoring an essay. We also include average word length, essay paragraph count, essay sentence count, average sentence length, and the standard deviation of the sentence length for completeness.
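These surface statistics are straightforward to compute; the sketch below uses naive whitespace and punctuation splitting, which is an assumption rather than the paper's exact tokenisation.

```python
import numpy as np

def statistical_features(essay_text):
    """Simple surface statistics used as low-validity probe features."""
    paragraphs = [p for p in essay_text.split("\n") if p.strip()]
    sentences = [s for s in essay_text.replace("!", ".").replace("?", ".").split(".")
                 if s.strip()]
    words = essay_text.split()
    sent_lens = [len(s.split()) for s in sentences] or [0]
    return {
        "num_words": len(words),
        "mean_word_length": float(np.mean([len(w) for w in words])) if words else 0.0,
        "num_paragraphs": len(paragraphs),
        "num_sentences": len(sentences),
        "mean_sentence_length": float(np.mean(sent_lens)),
        "std_sentence_length": float(np.std(sent_lens)),
    }
```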
RST Tree Features: These features were integrated to capture the rhetorical structure of the text, such as the hierarchy of principal and subordinate clauses, the logical and temporal relations between propositions, and the coherence of the argument. These concepts have a high validity for scoring essays (Jiang et al., 2019), especially for ORGANIZATION, so high alignment between functional groups would be expected. To generate RST trees for each essay, we utilize a pretrained RST parser specifically fine-tuned for student writing (Fiacco et al., 2022). We include the presence of an RST relation as a feature, as well as relation triplets (REL_parent, REL_child1, REL_child2) as tree-equivalent n-gram-like features.
Essay Prompt and Genre: Categorical representations of the essay prompt and genre were employed as features to examine if components of the AES model were preferentially activated based on the content or topic of the essay, a low validity feature.
Algorithmically Generated Word List Features:
We calculate the frequency of usage of words within algorithmically derived sets of words in the essays as a group of features to probe the AES model's consideration for stylistic language. To generate these word lists, we obtain Brown clusters (Brown et al., 1992) from essays. We generate separate Brown clusters for each prompt in our dataset and subsequently derive final word lists based on the overlaps of those clusters. This approach emphasizes common stylistic features as opposed to content-based clusters.
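Once the word lists exist, the feature values are simple relative frequencies; a hedged sketch of that computation is shown below (the lower-casing and list-of-sets representation of the word lists are assumptions).

```python
def word_list_frequencies(essay_tokens, word_lists):
    """Relative frequency of each word list's vocabulary in a tokenised essay.
    word_lists is assumed to be a list of sets of lower-cased words."""
    n = max(len(essay_tokens), 1)
    tokens = [t.lower() for t in essay_tokens]
    return [sum(t in wl for t in tokens) / n for wl in word_lists]
```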
Human Generated Word List Features: In addition to the algorithmically defined word lists, we devise our own word lists that may reflect how the AES model scores essays. We created word lists for the following categories: simple words, informal language, formal language, literary terms, transition words, and words unique to African American Vernacular English (AAVE).
Demographic Features: We used the percentage of students participating in the National School Lunch Program (NSLP) at a school as a weak proxy for the economic status of a student. Also as weak proxies for the economic status of essay authors, we include the school-level features of number of students and student-teacher ratio. Furthermore, we use a school-level distribution of ethnicity statistics as a weak proxy for the ethnic information of an essay's author. These features were employed to investigate the model's perception of any relationship between the writer's background and the quality, content, and style of the essay, in order to gain insight into the equity of the AES model.
Analysis Settings
To choose the number of components for ICA, a PCA was performed to determine how many components explained 95% of the variance of the activations (or 99% of the variance for the features), and that number was used as the number of ICA components.

Model A       Model B    # Comp. A  # Comp. B  # FG  # Aligned FG  # A Only  # B Only  # Mixed
ORGANIZATION  MAIN IDEA  119        55         125   22            12        0         10
ORGANIZATION  LANGUAGE   96         66         110   29            11        0         18
ORGANIZATION  SUPPORT    66         36         68    22            9         1         12
LANGUAGE      MAIN IDEA  78         55         93    23            8         3         12
LANGUAGE      SUPPORT    34         28         38    13            2         2         9
SUPPORT       MAIN IDEA  45         49         64    25            2         2         21

Table 2: Comparing the number of functional groups extracted for each model comparison and presenting the number of functional groups that were both deemed important (Section 3.1) and sufficiently aligned with at least one feature group. Also specified are the number of functional groups that are unique to a particular model and the number that are shared between the models of a given comparison pair.
To determine that a functional group was important, it needed to have an absolute Pearson's r value greater than 0.2. This threshold was also used to determine if a functional group should be considered aligned with a feature group.
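The two analysis settings above amount to a variance-based choice of component count and a fixed correlation cutoff; a small sketch of both, with assumed names, follows.

```python
import numpy as np
from sklearn.decomposition import PCA

IMPORTANCE_THRESHOLD = 0.2   # |Pearson's r| cutoff for importance and alignment

def n_components_for_ica(X, variance_kept=0.95):
    """Number of PCA components needed to reach the target explained variance,
    reused as the ICA component count (0.99 would be used for the features)."""
    ratios = PCA().fit(X).explained_variance_ratio_
    return int(np.searchsorted(np.cumsum(ratios), variance_kept) + 1)
```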
Results
In this section, we present aggregate statistics for each model comparison when it comes to computing features and independent feature groups (Table 1), extracting functional groups and aligning important functional groups (Table 2), and lastly, we provide examples taken from the model comparison between the LANGUAGE model and the MAIN IDEA model. Due to length constraints, we present detailed examples of this comparison only. Similar figures and correlation statistics can be found on GitHub 2.
Independent Feature Groups
Since each trained model held out a different set of prompts from its training set, common prompts between analysis sets needed to be identified, and thus the number of features extracted and the resulting independent feature groups vary between model comparisons. Computing the independent feature groups for each model comparison (Table 1) yielded between 70% and 77% of the original extracted features for all comparisons, except LANGUAGE V SUPPORT, which only yielded 57% as many independent feature groups compared to original features. Despite high variability in the number of independent feature groups identified during the process, a much narrower range of independent feature groups was aligned during the analysis.
The types of feature groups that were aligned varied considerably between different comparisons.
Functional Component Groups
The initial extraction of functional components for each model yielded between 28 and 119 functional components. Tables 1 and 2 show that, for a given model, fewer functional components are extracted when there are fewer instances in the analysis dataset. Despite this noise, a clear pattern emerges where the ORGANIZATION model has the most functional components, followed by the LANGUAGE model. The MAIN IDEA model has fewer functional components, with the SUPPORT model having the fewest. When performing the dimensionality reduction to compute the functional groups, there is a consistent reduction to approximately 61-71% of the combined total functional components.
Important Functional Groups
Despite the variance in the number of feature groups and functional groups extracted per comparison, there is a remarkably consistent number of important functional groups that have at least one sufficient alignment to a feature group (Table 2). With the exception of the LANGUAGE V SUPPORT comparison, all other comparisons had between 21 and 29 aligned functional groups.

Figure 3: Alignment diagram for functional groups (left) that are common to both the LANGUAGE and MAIN IDEA models with their alignment to feature groups (right). Only functional groups and feature groups are shown if they have a positive correlation greater than 0.25 (blue edges) or a negative correlation less than −0.25 (red edges). The numbers correspond to the IDs of the functional group or feature group that the node represents (see Table 3).
As a visual aid for the important functional groups, see the left sides of Figures 2 and 3. Each Figure is derived from the functional groups and feature groups of the LANGUAGE V MAIN IDEA comparison. The numbers on each node are the identifiers of a given functional group, a subset of which are represented in Table 3.
Alignment of Functional Groups
The full set of findings from the alignments for all of the comparisons would be too extensive to present in a conference paper format. However, we will present the major trends we found in our analysis. The first main trend is that all models had functional groups that were correlated with the statistical features of the essay. Furthermore, by computing the correlations between the individual features within that type, it was determined that the number of paragraphs is likely the most salient contributor.
The second set of trends is presented in Table 4, where the percent of the total aligned feature groups per model was computed. This revealed that the ORGANIZATION model had considerably more aligned RST-based features than the other models, while the MAIN IDEA model had the smallest proportion. The LANGUAGE model had the most aligned word list features, which is the combination of the algorithmically and human-created word list features. For the last percentage, we combine the prompt and demographic features and find that the SUPPORT model tended to align with fewer of these types of features. The reason for combining the demographic and prompt features is discussed in Section 6.
Qualitative Analysis
While the method that we presented can quickly advance one's understanding of a model from the black-box neural network to aligned feature groups directly, understanding what function a feature group represents can be more difficult. It is thus necessary to resolve what a feature group represents to form a strong statement on what the model is doing. For instance, we found it concerning that so many of the models were connected with feature groups that contained demographic features (colored red in Figures 2 and 3). However, on taking a qualitative look at which prompts were included in the datasets, we found that the distribution of prompts over the different schools, when controlling for essay length, was such that certain schools (with their demographic features) were the only source of certain prompts. It therefore becomes likely that many of these feature groups are topic-based rather than the potentially more problematic demographic-based. This interpretation was reinforced by many of the feature groups with demographic information also including prompts (e.g. "Independent Feature Group 29" from Table 3) and by examining essays that present those feature groups.
Discussion
The results presented in the preceding section demonstrate the efficacy of the proposed method in extracting salient feature groups and functional groups from the neural models, particularly when applied to the dataset under consideration. The true potential of this method, however, lies in its capacity to be broadly applied to any neural AES system, thereby facilitating a deeper understanding of the models and the underlying processes they employ.
In the following discussion, we will delve further into the results, emphasizing the prominent trends observed in the alignment of functional groups and their correlation with essay features, as well as the implications of these findings for enhancing the interpretability and transparency of neural AES systems.
Functional Component and Feature Groups
The proposed method successfully extracted meaningful functional groups from the analyzed neural models. Notably, the LANGUAGE V SUPPORT comparison emerged as an outlier in several of our analyses. This discrepancy is likely attributable to the considerably smaller number of essays shared by both models' analysis sets, which may result in a noisier analysis and expose a limitation of the method. As the size of the analysis set increases, one would expect the extraction of feature groups and functional groups to approach their ideal independence characteristics. Despite this limitation, the method managed to condense the analysis space from thousands of activations to fewer than 125 components while still accounting for over 90% of the model's variance.
Interestingly, the ORGANIZATION model exhibited the highest number of functional groups. This observation suggests that capturing the ORGANIZATION trait is a more intricate process, necessitating the learning of additional features. This notion is further corroborated by the comparisons between ORGANIZATION and the other models, which displayed very few, if any, functional groups exclusively present in the non-ORGANIZATION models.
Alignment of Important Functional Groups
In line with our expectations, the ORGANIZATION model demonstrated the greatest alignment with the RST tree features, while the LANGUAGE model displayed the most significant alignment with the word list features. It was postulated that ORGANIZATION would necessitate the model to possess knowledge of how ideas within essays are structured in relation to each other, a type of knowledge encoded by rhetorical structure theory. Although the RST parse trees recovered from the parser are considerably noisy (RST parsing of student essay data has been shown to be markedly more challenging than standard datasets (Fiacco et al., 2022)), the signal remained significant. Furthermore, we anticipated that the LANGUAGE model would have a greater reliance on word choice, a concept mirrored by the word list-based feature groups. Contrary to our expectations, the MAIN IDEA model exhibited the highest number of prompt-based feature groups. Our most plausible explanation for this observation is that certain prompts might have clearer expectations for thesis statements than others, a notion generally supported by a qualitative examination of the essays from prompts that score higher on MAIN IDEA.
Conclusion
The neural network interpretation technique presented in this paper demonstrates significant promise in learning the implicit rubrics of neural automated essay scoring models. By effectively mapping the intricate relationships between feature groups and the functional groups of the underlying scoring mechanism, the technique provides a step towards an understanding of the factors contributing to a transformer's evaluation of essay quality. This enhanced understanding enables researchers and educators to not only identify potential biases in scoring models, but also to refine their models to ensure a more reliable and fair assessment of student performance.
The code for this method will be released and incorporated into an analysis tool that can be applied to neural models beyond the ones examined in this work, with the goal of paving the way for greater transparency in neural AES models. These advancements can contribute to the overarching goal of promoting ethical and responsible AI in education by facilitating the examination and comprehension of complex neural models.
Figure 1: Diagram visualizing the structure of the methodology. Nodes of each color represent correlated values.

Figure 2: Alignment diagram for functional groups (left) that are specific to the MAIN IDEA model with their alignment to feature groups (right). Only functional groups and feature groups are shown if they have a positive correlation greater than 0.25 (blue edges) or a negative correlation less than −0.25 (red edges). The numbers correspond to the IDs of the functional group or feature group that the node represents (see Table 3).
Model A       Model B    # Essays  Extracted Features  # Independent Feature Groups  # Aligned IFG
ORGANIZATION  MAIN IDEA  407       148                 114                           24
ORGANIZATION  LANGUAGE   275       118                 86                            39
ORGANIZATION  SUPPORT    144       90                  63                            37
LANGUAGE      MAIN IDEA  341       129                 95                            26
LANGUAGE      SUPPORT    72        67                  38                            23
SUPPORT       MAIN IDEA  260       127                 94                            27

Table 1: Comparing analysis dataset size and numbers of extracted features for each of the model comparisons, identified by the Model A and Model B columns.
Table 3: Selected examples of correlated functional group/feature groups. Pearson's r values for the relevant importance metric (model difference, model predictions) and feature group alignment are presented with p-values.
Table 4: Percentage of aligned feature groups for a given model, by feature type.
Code and tool available at https://github.com/jfiacco/aes_neural_functional_groups
https://github.com/jfiacco/aes_neural_functional_groups/tree/main/supplementary_results
Acknowledgements

This work was supported in part by NSF grant DRL 1949110.
Table 3 (residual content):
Functional Group 56 (importance: ModelErrors:MAINIDEA(+), ModelPairDifference(+), ModelErrors:LANGUAGE(-); Predictions:MAINIDEA r = −0.13, p < 0.05) aligned with Independent Feature Group 21 (r = 0.75, p < 0.001): EssayStats:STDDEVSENTENCELENGTH(+), EssayStats:NUMSENTENCES(+), EssayStats:MEANWORDLENGTH(+), EssayStats:NUMWORDS(-), EssayStats:NUMPARAGRAPHS(-), EssayStats:MEANSENTENCELENGTH(-).
Functional Group 92 (Predictions:LANGUAGE r = −0.13, p < 0.05) aligned with Independent Feature Group 12 (r = −0.20, p < 0.001): WordCluster:PRIORITIES(+), WordCluster:POPULATIONCOMPARISION(+), WordCluster:EFFICIENCY(+), WordCluster:TEENVALUES(-), WordCluster:STORYTELLING(-), WordCluster:SCHOOL(-), WordCluster:PARENTALDECISIONS(-), WordCluster:INFORMAL(-), WordCluster:HISTORICALCONFLICT(-).
Independent Feature Group 69 (r = 0.22, p < 0.001): RST:NN|CONTRAST(+), RST:SN|EVALUATION(NS|ELABORATION, LEAF)(+), RST:SN|BACKGROUND(LEAF, NS|ELABORATION)(+), RST:NS|EVIDENCE(LEAF, NN|CONJUNCTION)(+), RST:NN|JOINT(NN|CONJUNCTION, NN|JOINT)(+), RST:NN|CONTRAST(LEAF, LEAF)(+), RST:NN|CONJUNCTION(NS|ELABORATION, NN|CONJUNCTION)(+), RST:SN|EVALUATION(NN|CONJUNCTION, LEAF)(-), RST:NN|CONJUNCTION(LEAF, LEAF)(-).
Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. Yossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, Yoav Goldberg, arXiv:1608.04207arXiv preprintYossi Adi, Einat Kermany, Yonatan Belinkov, Ofer Lavi, and Yoav Goldberg. 2016. Fine-grained analysis of sentence embeddings using auxiliary prediction tasks. arXiv preprint arXiv:1608.04207.
Probing classifiers: Promises, shortcomings, and advances. Computational Linguistics. Yonatan Belinkov, 48Yonatan Belinkov. 2022. Probing classifiers: Promises, shortcomings, and advances. Computational Linguis- tics, 48(1):207-219.
Longformer: The long-document transformer. Iz Beltagy, E Matthew, Arman Peters, Cohan, arXiv:2004.05150arXiv preprintIz Beltagy, Matthew E Peters, and Arman Cohan. 2020. Longformer: The long-document transformer. arXiv preprint arXiv:2004.05150.
Classbased n-gram models of natural language. F Peter, Vincent J Della Brown, Pietra, V Peter, Jennifer C Desouza, Robert L Lai, Mercer, Computational linguistics. 184Peter F Brown, Vincent J Della Pietra, Peter V Desouza, Jennifer C Lai, and Robert L Mercer. 1992. Class- based n-gram models of natural language. Computa- tional linguistics, 18(4):467-480.
Low-complexity probing via finding subnetworks. Steven Cao, Victor Sanh, Alexander M Rush, Proceedings of the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2021 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesSteven Cao, Victor Sanh, and Alexander M Rush. 2021. Low-complexity probing via finding subnetworks. In Proceedings of the 2021 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 960-966.
Independent component analysis, a new concept? Signal processing. Pierre Comon, 36Pierre Comon. 1994. Independent component analysis, a new concept? Signal processing, 36(3):287-314.
What you can cram into a single $ &!#* vector: Probing sentence embeddings for linguistic properties. Alexis Conneau, Germán Kruszewski, Guillaume Lample, Loïc Barrault, Marco Baroni, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsLong Papers1Alexis Conneau, Germán Kruszewski, Guillaume Lam- ple, Loïc Barrault, and Marco Baroni. 2018. What you can cram into a single $ &!#* vector: Probing sentence embeddings for linguistic properties. In Proceedings of the 56th Annual Meeting of the As- sociation for Computational Linguistics (Volume 1: Long Papers), pages 2126-2136.
Tune your brown clustering, please. Leon Derczynski, Sean Chester, Kenneth S Bøgh, ternational Conference Recent Advances in Natural Language Processing. Association for Computational Linguistics2015Leon Derczynski, Sean Chester, and Kenneth S Bøgh. 2015. Tune your brown clustering, please. In In- ternational Conference Recent Advances in Natural Language Processing, RANLP, volume 2015, pages 110-117. Association for Computational Linguistics.
Aoife Cahill, and Torsten Zesch. 2020. Don't take "nswvtnvakgxpm" for an answer-the surprising vulnerability of automatic content scoring systems to adversarial input. Yuning Ding, Brian Riordan, Andrea Horbach, Proceedings of the 28th international conference on computational linguistics. the 28th international conference on computational linguisticsYuning Ding, Brian Riordan, Andrea Horbach, Aoife Cahill, and Torsten Zesch. 2020. Don't take "nswvt- nvakgxpm" for an answer-the surprising vulnerabil- ity of automatic content scoring systems to adversar- ial input. In Proceedings of the 28th international conference on computational linguistics, pages 882- 892.
Deep neural model inspection and comparison via functional neuron pathways. James Fiacco, Samridhi Choudhary, Carolyn Rose, Proceedings of the 57th Conference of the Association for Computational Linguistics. the 57th Conference of the Association for Computational LinguisticsJames Fiacco, Samridhi Choudhary, and Carolyn Rose. 2019. Deep neural model inspection and comparison via functional neuron pathways. In Proceedings of the 57th Conference of the Association for Computa- tional Linguistics, pages 5754-5764.
Toward automatic discourse parsing of student writing motivated by neural interpretation. James Fiacco, Shiyan Jiang, David Adamson, Carolyn Rose, Proceedings of the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022). the 17th Workshop on Innovative Use of NLP for Building Educational Applications (BEA 2022)James Fiacco, Shiyan Jiang, David Adamson, and Car- olyn Rose. 2022. Toward automatic discourse pars- ing of student writing motivated by neural interpreta- tion. In Proceedings of the 17th Workshop on Inno- vative Use of NLP for Building Educational Applica- tions (BEA 2022), pages 204-215.
Analysis of a complex of statistical variables into principal components. Harold Hotelling, Journal of educational psychology. 246417Harold Hotelling. 1933. Analysis of a complex of sta- tistical variables into principal components. Journal of educational psychology, 24(6):417.
A systematic review of automated writing evaluation systems. Education and Information Technologies. Shi Huawei, Vahid Aryadoust, 28Shi Huawei and Vahid Aryadoust. 2023. A system- atic review of automated writing evaluation systems. Education and Information Technologies, 28(1):771- 795.
Applying rhetorical structure theory to student essays for providing automated writing feedback. Shiyan Jiang, Kexin Yang, Chandrakumari Suvarna, Pooja Casula, Mingtong Zhang, Carolyn Rose, Proceedings of the Workshop on Discourse Relation Parsing and Treebanking. the Workshop on Discourse Relation Parsing and TreebankingShiyan Jiang, Kexin Yang, Chandrakumari Suvarna, Pooja Casula, Mingtong Zhang, and Carolyn Rose. 2019. Applying rhetorical structure theory to student essays for providing automated writing feedback. In Proceedings of the Workshop on Discourse Relation Parsing and Treebanking 2019, pages 163-168.
Tdnn: a two-stage deep neural network for promptindependent automated essay scoring. Cancan Jin, Ben He, Kai Hui, Le Sun, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsLong Papers1Cancan Jin, Ben He, Kai Hui, and Le Sun. 2018. Tdnn: a two-stage deep neural network for prompt- independent automated essay scoring. In Proceed- ings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1088-1097.
Urvashi Khandelwal, He He, Peng Qi, Dan Jurafsky, arXiv:1805.04623Sharp nearby, fuzzy far away: How neural language models use context. arXiv preprintUrvashi Khandelwal, He He, Peng Qi, and Dan Ju- rafsky. 2018. Sharp nearby, fuzzy far away: How neural language models use context. arXiv preprint arXiv:1805.04623.
Calling out bluff: attacking the robustness of automatic scoring systems with simple adversarial testing. Yaman Kumar, Mehar Bhatia, Anubha Kabra, Jessy Junyi Li, Di Jin, Rajiv Ratn Shah, arXiv:2007.06796arXiv preprintYaman Kumar, Mehar Bhatia, Anubha Kabra, Jessy Junyi Li, Di Jin, and Rajiv Ratn Shah. 2020. Calling out bluff: attacking the robustness of auto- matic scoring systems with simple adversarial testing. arXiv preprint arXiv:2007.06796.
Automatic essay scoring systems are both overstable and oversensitive: Explaining why and proposing defenses. Yaman Kumar, Swapnil Parekh, Somesh Singh, Junyi Jessy Li, Rajiv Ratn Shah, Changyou Chen, Dialogue & Discourse. 141Yaman Kumar, Swapnil Parekh, Somesh Singh, Junyi Jessy Li, Rajiv Ratn Shah, and Changyou Chen. 2023. Automatic essay scoring systems are both overstable and oversensitive: Explaining why and proposing defenses. Dialogue & Discourse, 14(1):1- 33.
Lstms can learn syntax-sensitive dependencies well, but modeling structure makes them better. Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yogatama, Stephen Clark, Phil Blunsom, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational Linguistics1Adhiguna Kuncoro, Chris Dyer, John Hale, Dani Yo- gatama, Stephen Clark, and Phil Blunsom. 2018. Lstms can learn syntax-sensitive dependencies well, but modeling structure makes them better. In Pro- ceedings of the 56th Annual Meeting of the Associa- tion for Computational Linguistics (Volume 1: Long Papers), volume 1, pages 1426-1436.
A unified approach to interpreting model predictions. Advances in neural information processing systems. M Scott, Su-In Lundberg, Lee, 30Scott M Lundberg and Su-In Lee. 2017. A unified ap- proach to interpreting model predictions. Advances in neural information processing systems, 30.
Rhetorical structure theory: A theory of text organization. C William, Sandra A Mann, Thompson, University of Southern California, Information Sciences Institute Los AngelesWilliam C Mann and Sandra A Thompson. 1987. Rhetorical structure theory: A theory of text organiza- tion. University of Southern California, Information Sciences Institute Los Angeles.
Exploring the potential of using an ai language model for automated essay scoring. Atsushi Mizumoto, Masaki Eguchi, Research Methods in Applied Linguistics. 22100050Atsushi Mizumoto and Masaki Eguchi. 2023. Exploring the potential of using an ai language model for auto- mated essay scoring. Research Methods in Applied Linguistics, 2(2):100050.
Automated essay scoring with discourseaware neural models. Farah Nadeem, Huy Nguyen, Yang Liu, Mari Ostendorf, Proceedings of the fourteenth workshop on innovative use of NLP for building educational applications. the fourteenth workshop on innovative use of NLP for building educational applicationsFarah Nadeem, Huy Nguyen, Yang Liu, and Mari Osten- dorf. 2019. Automated essay scoring with discourse- aware neural models. In Proceedings of the four- teenth workshop on innovative use of NLP for build- ing educational applications, pages 484-493.
The imminence of... grading essays by computer. The Phi Delta Kappan. B Ellis, Page, 47Ellis B Page. 1966. The imminence of... grading essays by computer. The Phi Delta Kappan, 47(5):238-243.
An automated essay scoring systems: a systematic literature review. Dadi Ramesh, Suresh Kumar Sanampudi, Artificial Intelligence Review. 553Dadi Ramesh and Suresh Kumar Sanampudi. 2022. An automated essay scoring systems: a system- atic literature review. Artificial Intelligence Review, 55(3):2495-2527.
why should i trust you?" explaining the predictions of any classifier. Sameer Marco Tulio Ribeiro, Carlos Singh, Guestrin, Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining. the 22nd ACM SIGKDD international conference on knowledge discovery and data miningMarco Tulio Ribeiro, Sameer Singh, and Carlos Guestrin. 2016. " why should i trust you?" explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining, pages 1135- 1144.
Learning mathematical properties of integers. Maria Ryskina, Kevin Knight, Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP. the Fourth BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLPMaria Ryskina and Kevin Knight. 2021. Learning math- ematical properties of integers. In Proceedings of the Fourth BlackboxNLP Workshop on Analyzing and In- terpreting Neural Networks for NLP, pages 389-395.
Natural language processing based automated essay scoring with parameter-efficient transformer approach. Angad Sethi, Kavinder Singh, 2022 6th International Conference on Computing Methodologies and Communication (ICCMC). IEEEAngad Sethi and Kavinder Singh. 2022. Natural lan- guage processing based automated essay scoring with parameter-efficient transformer approach. In 2022 6th International Conference on Computing Method- ologies and Communication (ICCMC), pages 749- 756. IEEE.
Why neural translations are the right length. Xing Shi, Kevin Knight, Deniz Yuret, Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingXing Shi, Kevin Knight, and Deniz Yuret. 2016. Why neural translations are the right length. In Proceed- ings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 2278-2282.
A neural approach to automated essay scoring. Kaveh Taghipour, Hwee Tou Ng, Proceedings of the 2016 conference on empirical methods in natural language processing. the 2016 conference on empirical methods in natural language processingKaveh Taghipour and Hwee Tou Ng. 2016. A neural approach to automated essay scoring. In Proceed- ings of the 2016 conference on empirical methods in natural language processing, pages 1882-1891.
Trustworthy automated essay scoring without explicit construct validity. Patti West-Smith, Stephanie Butler, Elijah Mayfield, AAAI Spring Symposia. Patti West-Smith, Stephanie Butler, and Elijah Mayfield. 2018. Trustworthy automated essay scoring without explicit construct validity. In AAAI Spring Symposia.
Formative essay feedback using predictive scoring models. Bronwyn Woods, David Adamson, Shayne Miel, Elijah Mayfield, Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining. the 23rd ACM SIGKDD international conference on knowledge discovery and data miningBronwyn Woods, David Adamson, Shayne Miel, and Elijah Mayfield. 2017. Formative essay feedback using predictive scoring models. In Proceedings of the 23rd ACM SIGKDD international conference on knowledge discovery and data mining, pages 2071- 2080.
Exploring semantic properties of sentence embeddings. Xunjie Zhu, Tingfeng Li, Gerard Melo, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational Linguistics2Xunjie Zhu, Tingfeng Li, and Gerard Melo. 2018. Ex- ploring semantic properties of sentence embeddings. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 632-637. |
11,200,252 | Random Graph Model Simulations of Semantic Networks for Associative Concept Dictionaries | Word association data in dictionary form can be simulated through the combination of three components: a bipartite graph with an imbalance in set sizes; a scale-free graph of the Barabási-Albert model; and a normal distribution connecting the two graphs. Such a model makes it possible to simulate the complex features in degree distributions and the interesting graph clustering results that are typically observed for real data. | [] | Random Graph Model Simulations of Semantic Networks for Associative Concept Dictionaries
Manchester, August 2008. Copyright 2008.
Hiroyuki Akama akama@dp.hum.titech.ac.jp
Tokyo Institute of Technology
2-12-1 O-okayama, Meguro-ku, Tokyo 152-8550, Japan
Terry Joyce
Tama University
802 Engyo, Fujisawa-shi, Kanagawa-ken 252-0805, Japan
Jaeyoung Jung
Tokyo Institute of Technology
2-12-1 O-okayama, Meguro-ku, Tokyo 152-8550, Japan
Maki Miyake mmiyake@lang.osaka-u.ac.jp
Osaka University
1-8 Machikaneyama-cho, Toyonaka-shi, Osaka 560-0043, Japan
Random Graph Model Simulations of Semantic Networks for Associative Concept Dictionaries
Coling 2008: 3rd Textgraphs Workshop on Graph-Based Algorithms in Natural Language Processing, Manchester, August 2008
Word association data in dictionary form can be simulated through the combination of three components: a bipartite graph with an imbalance in set sizes; a scale-free graph of the Barabási-Albert model; and a normal distribution connecting the two graphs. Such a model makes it possible to simulate the complex features in degree distributions and the interesting graph clustering results that are typically observed for real data.
Modeling background
Associative Concept Dictionaries (ACDs) consist of word pair data based on psychological experiments where the participants are typically asked to provide the semantically-related response word that comes to mind on presentation of a stimulus word. Two well-known ACDs for English are the University of South Florida word association, rhyme and word fragment norms (Nelson et al., 1998) and the Edinburgh Word Association Thesaurus of English (EAT; Kiss et al., 1973). Two ACDs for Japanese are Ishizaki's Associative Concept Dictionary (IACD) (Okamoto and Ishizaki, 2001) and the Japanese Word Association Database (JWAD) (Joyce, 2005(Joyce, , 2006(Joyce, , 2007.
While there are a number of practical applications for ACDs, three are singled out for mention here. The first is in the area of artificial intelligence, where ACDs can contribute to the development of intelligent information retrieval systems for societies requiring increasingly sophisticated navigation methods. A second application is in the field of medicine, where ACDs could be used in developing systems that seek to prevent dementia by checking higher brain functions with a brain dock. Finally, within educational settings, ACDs can greatly facilitate language learning through the manifestation of inherent cultural modes of thinking.
The typical format of an ACD is to list the stimulus words (cue words) and their response words together with some statistics relating to the word pairing. The stimulus words are generally basic words determined in advance by the experimenter, while the response words are semantically associated words provided by respondents on presentation of the stimulus word. The statistics for the word pairing include, for example, measured or calculated indices of distance or perhaps some classification of the semantic relationship between the pair of words.
In order to mathematically analyze the structure of ACDs, the raw association data is often transformed into some form of graph or complex network representation, where the vertices stand for words and the edges indicate an associative relationship (Joyce and Miyake, 2007). However, to our knowledge, there have been no attempts at mathematically simulating an ACD as a way of determining in advance the architectural design of a dictionary. One reason is that it is a major challenge to compute maximum likelihood estimations (MLEs) or Monte-Carlo simulations for graph data (Snijder, 2005). Thus, it is extremely difficult to predict dependences for unknown factors such as the lexical distribution across a predetermined and controllable dictionary framework starting simply from a list of basic words. Accordingly, we propose an easier and more basic approach to constructing an ACD model by combining random graph models to simulate graph features in terms of degree distributions and clustering results.
Degree distributions for ACDs
Typical local skew
It is widely known that Barabási and Albert (1999) have suggested that the degree distributions of scale-free network structures correspond to a power law, expressed as
P(x = d) = d^(−r)
(where d stands for degree and r is a small number, such as 2 or 3). This type of distribution is also known as Zipf's law, describing the typical frequency distribution of words in a document, and plots on a log scale as a falling diagonal stroke. However, in the degree distribution of ACDs, there is always a local skew, as a local peak or bump with a low hemline, as shown in Figure 1. The plots indicate a combination of heterogeneous distributions, consisting of a single degree distribution represented as a bell form with a steep slope on the right side. However, what is most interesting here is that throughout the distribution range the curves remain regular and continuous, with an absence of any ruptures or fractures both before and after the local peaks.
When actual ACD data is examined, one finds that as response words are not linked together, almost all the words located in the skewed part are stimulus words (which we refer to as peak words in this study), while the items before the local peak are less frequent response words that have a strong tendency to conform to a decaying distribution. It is therefore relatively natural to divide all word pairs into two types of graph: a bipartite graph for new response words that are not already part of the stimulus list, and a graph that conforms to Zipf's law for the frequencies of response words that are already present in the stimulus list. For the first type, new response words are represented as nodes only with incoming links, generating a bipartite graph with two sets of different sizes. This bipartite graph would exhibit the decaying distribution due to low-frequency response words prior to the local peak. In the second type of graph, response words are represented as nodes with both incoming and outgoing links. This second type is similar to a scale-free graph, such as that incorporated within the Barabási-Albert (BA) model.
Bipartite Graph and BA Model
A bipartite graph is a graph consisting of vertices that are divided into two independent sets, S and R, such that every edge connects to one S vertex and one R vertex. The graph can be represented by an adjacency matrix with diagonal zero submatrices, where the values of the lower right submatrices would all be zero were it not for the appearances of some stimulus words as response words. The lower right section is exactly where the extremely high degrees of hubs are produced, which far exceed the average numbers of response words.
Thus, we adopt an approach to generating a scale-free graph that reflects Zipf's law for frequency distributions. According to the BA model, the probability that a node receives an additional link is proportional to its degree. Here, we implement the principle of preferential attachment formulated by Bollobás (2003), with the addition of one condition that is specific to ACDs, which we explain below. The BA model starts with a small number, m_0, of vertices, and at each time step, T, a new vertex with m edges is added and linked to m different vertices that already exist in the graph. The probability that a new vertex will be connected to a vertex i depends on the connectivity of that vertex, as expressed by Equation (1). However, we specifically assume that m is a random natural number that is smaller than m_0, because in actual data the ratio of stimulus words among all response words for each stimulus word is obviously far from constant.
P(x_i) = m d_t(x_i) / Σ_{t'=1..T+1} d_t(x_{t'})    (1)

where d_t(x_i) denotes the degree of vertex x_i in the process at time t.
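A minimal sketch of this modified preferential-attachment growth is given below; it is not the authors' implementation, the uniform draw of m and the use of networkx are assumptions, and the sketch builds the undirected counterpart B_ij that the composed adjacency matrix ultimately uses.

```python
import random
import networkx as nx

def modified_ba_graph(m0, num_steps, seed=0):
    """Preferential attachment in the spirit of Equation (1), with the
    ACD-specific condition that each new vertex draws a random m < m0."""
    random.seed(seed)
    g = nx.complete_graph(m0)                              # seed graph over the first m0 vertices
    targets = [n for edge in g.edges() for n in edge]      # degree-weighted sampling pool
    for new in range(m0, m0 + num_steps):
        m = random.randint(1, m0 - 1)                      # random natural number smaller than m0
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(targets))             # P(i) proportional to current degree
        for t in chosen:
            g.add_edge(new, t)
            targets.extend([new, t])
    return g
```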
Moreover, the graph for the BA model here should be regarded as being a directed graph, because the very reason that hubs emerge within semantic network representations of ACDs is that the number of incoming edges is much larger than the expected number of nodes for each possible in-degree. In contrast, out-degree is limited by the number of responses for each stimulus word i, which is represented as c(i). Let c(i) follow a normal distribution with a mean m_c and a small variance σ² (which is not constant but nearly so) to smoothly combine the distribution of the bipartite graph and the power distribution. If a directed adjacency matrix for the network exclusively between stimulus words is expressed as D(B)_ij, then the sum of the non-zero values for each row in a random bipartite graph introducing new response words will be

c(i) − Σ_j D(B)_ij

(the vertices of stimulus words with the subscript j are linked with the vertex of the stimulus word i). Thus, new response words (words that are not stimulus words) will be randomly allocated within a bipartite graph according to Equation (2):
P((i, l) = 1) = (c(i) − Σ_j D(B)_ij) / r    (2)

where r is the approximate number of such words. Equation (2) will yield the lower left and the upper right sections of the complete adjacency matrix A for the ACD model. The sub-matrix P^t in Equation (3) refers to the transposition of the sub-matrix P. The adjacency matrix in Equation (3) represents a pseudo bipartite graph structure where the upper left section is a zero sub-matrix (because there are no interconnections among new response words), but the lower right section is not. Here, B_ij (not D(B)_ij, but the undirected counterpart to it), which corresponds to the BA model, is taken as a subsection of the adjacency matrix, which must be nondirected for the whole composition.
A = [ O    P   ]
    [ P^t  B_ij ]    (3)
The key to understanding Equation (3) is to realize that P is conditionally dependent on B_ij, because we assume a normal distribution for the number of non-zero values at each row in the lower section of A.
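To illustrate how Equations (2) and (3) fit together, the following sketch assembles the block adjacency matrix A from a given stimulus-stimulus adjacency B and sampled response counts c(i); the clamping of counts and the uniform choice of target rows are assumptions made for the sketch.

```python
import numpy as np

def assemble_acd_adjacency(B, c, r, rng=None):
    """Assemble A = [[O, P], [P^t, B]] (Equation 3). B is the undirected
    stimulus-stimulus adjacency (n x n); c[i] is the sampled response count
    for stimulus word i; r is the number of new response words."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n = B.shape[0]
    P = np.zeros((r, n), dtype=int)           # new-response-word x stimulus links
    for i in range(n):
        # Responses not already covered by stimulus-stimulus links (Equation 2).
        k = min(max(int(c[i]) - int(B[i].sum()), 0), r)
        rows = rng.choice(r, size=k, replace=False)
        P[rows, i] = 1
    top = np.hstack([np.zeros((r, r), dtype=int), P])
    bottom = np.hstack([P.T, B.astype(int)])
    return np.vstack([top, bottom])           # (r + n) x (r + n) adjacency matrix A
```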
Simulation Results
Taking into account the approximate numbers of possible new response words, in other words, the balance in sizes between the two sets in the bipartite graph, we built a composition of partial random graphs that could represent an adjacency matrix of the ACD model (Figure 2). The degree distribution for the artificial network is consistent with the features observed for actual ACD data, where more than 96% of the stimulus words in each data set are distributed across the peak section of the degree distribution, which is why we have referred to them as peak words. Moreover, it is easy to verify that without the assumption of a normal distribution for c(i), distinct fractures emerge in the artificial curve, where new response words in the bipartite structure would be distinguished from stimulus words located at the initial points of the local peak.
Markov Clustering of ACDs
MCL
This section introduces the graph clustering method that is applied to both the real and artificial ACD data in order to compare them. Markov Clustering (MCL), proposed by Van Dongen (2001), is well known as a scalable unsupervised clustering algorithm for graphs that decomposes a whole graph into small coherent groups by simulating the probabilistic movements of a random walker across the graph. It is believed that when MCL is applied to semantic networks, it yields clusters of words that share certain similarities in meaning or appear to be related to common concepts.
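For reference, a minimal MCL sketch (expansion by matrix powering, inflation by element-wise powering, then column re-normalisation) is shown below; the parameter values and the convergence handling are illustrative, not the settings used in this study.

```python
import numpy as np

def markov_clustering(adj, expansion=2, inflation=2.0, iterations=50):
    """Minimal MCL over a dense adjacency matrix: alternate expansion and
    inflation until the flow matrix settles, then read clusters from rows."""
    M = adj.astype(float) + np.eye(adj.shape[0])        # add self-loops
    M /= M.sum(axis=0, keepdims=True)                    # column-stochastic
    for _ in range(iterations):
        M = np.linalg.matrix_power(M, expansion)         # expansion
        M = M ** inflation                               # inflation
        M /= M.sum(axis=0, keepdims=True)
    clusters = {frozenset(np.where(M[row] > 1e-6)[0].tolist())
                for row in range(M.shape[0]) if M[row].max() > 1e-6}
    return [set(c) for c in clusters]
```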
MCL Results
The clustering results for the ACD model created by combining random graphs reveal that each of the resultant clusters contains only one stimulus word surrounded by several response words. This result is somewhat strange because there are dense connections between stimulus words, which would lead us to assume that clusters would have multiple stimulus words. However, the results of applying MCL clustering to the graph for the ACD model are in reality highly influenced by the sub-structure of the bipartite graph and less dependent on the scale-free structure.
Nevertheless, the result is quite similar to results observed with real data. On examining MCL clustering results for different ACD semantic networks, we have observed that MCL clusters tend to consist of one word node with a relatively high degree and some other words with relatively low degrees. On closer inspection of the graph, it is possible to see several supporter nodes that gather around one leader node, forming a kind of small conceptual community. This suggests that the highest degree word for each cluster becomes a representative for that particular cluster consisting of some other low degree words. In short, MCL clustering is executed based on such high degree words that tend to have relatively low curvature values (Dorow, 2005) compared to their high average degree values.
Figure 1: Degree distributions for actual data. The figure presents two degree distributions: for the IACD (upper) (r = 1.8) and the JWAD (lower) (r = 2.3).

Figure 2: Degree distribution of an ACD model. As the figure shows, the local peak and the accompanying hemline in the degree distribution are clearly simulated by the complex combination of random graphs.
Conclusion

In this paper, we have proposed a basic approach to simulating word association dictionary data through the application of graph methodologies. This modeling is expected not only to provide insights into the structures of real ACD data, but also to predict, by manipulating the model parameters, possible forms for future ACDs. Future research will focus on constructing an exponential random graph model for ACDs based on Markov Chain Monte Carlo (MCMC) methods.
Barabási, Albert-László and Réka Albert. 1999. Emergence of scaling in random networks. Science, 286:509-512.
Bollobás, Béla. 2003. Mathematical Results on Scale-free Random Graphs. http://www.stat.berkeley.edu/~aldous/Networks/boll1.pdf
Dorow, Beate et al. 2005. Using Curvature and Markov Clustering in Graphs for Lexical Acquisition and Word Sense Discrimination. MEANING-2005, 2nd Workshop organized by the MEANING Project, February 3rd-4th.
Joyce, Terry and Maki Miyake. 2007. Capturing the Structures in Association Knowledge: Application of Network Analyses to Large-Scale Databases of Japanese Word Associations. In Large-Scale Knowledge Resources. Construction and Application, Springer Verlag:116-131.
Kiss, G.R., Armstrong, C., Milroy, R., and Piper, J. 1973. An associative thesaurus of English and its computer analysis. In Aitken, A.J., Bailey, R.W. and Hamilton-Smith, N. (Eds.), The Computer and Literary Studies, Edinburgh University Press.
Nelson, Douglas L., Cathy L. McEvoy, and Thomas A. Schreiber. 1998. The University of South Florida word association, rhyme, and word fragment norms. Retrieved August 31, 2005, from http://www.usf.edu/FreeAssociation
Okamato, Jun and Shun Ishizaki. 2001. Associative Concept Dictionary and its Comparison with Electronic Concept Dictionaries. PACLING2001 - 4th Conference of the Pacific Association for Computational Linguistics:214-220.
Snijders, Tom A.B., Philippa E. Pattison, Garry L. Robins, and Mark S. Handcock. 2005. New Specifications for Exponential Random Graph Models. http://stat.gamma.rug.nl/SnijdersPattisonRobinsHandcock2006.pdf
Steyvers, Mark and Josh Tenenbaum. 2005. The Large-Scale Structure of Semantic Networks: Statistical Analyses and a Model of Semantic Growth. Cognitive Science, 29(1):41-78.
15,369,498 | A PropBank for Portuguese: the CINTIL-PropBank | With the CINTIL-International Corpus of Portuguese, an ongoing corpus annotated with fully fledged grammatical representation, sentences get not only a high level of lexical, morphological and syntactic annotation but also a semantic analysis that prepares the data for a manual specification step and thus opens the way for a number of tools and resources for which there is a great research focus at the present. This paper reports on the construction of a propbank that builds on CINTIL-DeepGramBank, with nearly 10 thousand sentences, on the basis of a deep linguistic grammar and on the process and the linguistic criteria guiding that construction, which makes it possible to obtain a complete PropBank with both syntactic and semantic levels of linguistic annotation. Taking into account this and the promising scores presented in this study for inter-annotator agreement, CINTIL-PropBank presents itself as a great resource to train a semantic role labeller, one of our goals with this project. | [
2486369,
10540932,
16509032,
30354032,
13350236,
252796
] | A PropBank for Portuguese: the CINTIL-PropBank
António Branco antonio.branco@di.fc.ul.pt
Catarina Carvalheiro catarina.carvalheiro@di.fc.ul.pt
Sílvia Pereira silvia.pereira@di.fc.ul.pt
Mariana Avelãs mariana.avelas@di.fc.ul.pt
Clara Pinto clara.pinto@di.fc.ul.pt
Sara Silveira sara.silveira@di.fc.ul.pt
Francisco Costa fcosta@di.fc.ul.pt
João Silva jsilva@di.fc.ul.pt
Sérgio Castro sergio.castro@di.fc.ul.pt
João Graça
Departamento de Informática, Faculdade de Ciências, Universidade de Lisboa, Edifício C6, Campo Grande, 1749-016, Portugal
A PropBank for Portuguese: the CINTIL-PropBank
propbank, portuguese, annotated corpus
With the CINTIL-International Corpus of Portuguese, an ongoing corpus annotated with fully fledged grammatical representation, sentences get not only a high level of lexical, morphological and syntactic annotation but also a semantic analysis that prepares the data for a manual specification step and thus opens the way for a number of tools and resources that are currently a major research focus. This paper reports on the construction of a propbank that builds on the CINTIL-DeepGramBank, with nearly 10 thousand sentences, on the basis of a deep linguistic grammar, and on the process and the linguistic criteria guiding that construction, which make it possible to obtain a complete PropBank with both syntactic and semantic levels of linguistic annotation. Taking this into account, together with the promising inter-annotator agreement scores presented in this study, CINTIL-PropBank presents itself as a great resource to train a semantic role labeller, one of our goals with this project.
Introduction
Following the important methodological breakthrough that took place in Language Technology with the advent of statistical approaches, the development of annotated corpora has been deployed around adding increasingly more complex linguistic information, e.g. concerning phrase constituency (aka TreeBanks (Marcus et al., 1993)), syntactic functions (aka DependencyBanks (Böhmová et al., 2001)), and phrase-level semantic roles (aka PropBanks (Palmer et al., 2005)), just to mention a few salient examples. To keep advancing along this trend and to develop corpora that are annotated with deep linguistic representations, the construction of annotated corpora faces a challenge that demands a new qualitative step: the fully fledged grammatical representation to be assigned to each sentence is so complex and so specific to that sentence that it cannot be reliably crafted manually piece by piece, and the annotation cannot be performed without some supporting application, viz. a computational grammar. This paper discusses the solutions we developed to construct a propbank on the basis of a deep linguistic grammar and its companion deep linguistic treebank, with a central goal: the construction of a high-quality data set with semantic information that could support the development of automatic semantic role labellers (Baker et al., 2007; Carreras and Màrquez, 2005) for Portuguese. Section 2 reports on the construction of a propbank on the basis of a corpus annotated with a deep linguistic grammar. In Section 3, we describe the extraction of semi-annotated constituency trees with automatic semantic roles that assist the manual completion step of our dynamic propbank, presented in Section 4. In Section 5, we enumerate some applications of the PropBank, and Section 6 presents the concluding remarks.
A PropBank supported by a deep linguistic grammar
The deep linguistic grammar used for the initial semiautomatic propbanking was LXGram, a grammar for the computational processing of Portuguese (Branco and Costa, 2008a; Branco and Costa, 2008b), developed under the grammatical framework of HPSG (Pollard and Sag, 1994), which uses MRS (Copestake et al., 2005) for the representation of meaning and the Grammar Matrix (Bender et al., 2002) for the initial type system. In a first phase, the parses obtained with LXGram and manually selected by human annotators were gathered in the CINTIL-DeepGramBank, a corpus of deep grammatical representations composed of sentences taken from the CINTIL-International Corpus of Portuguese, with 1 million tokens of written and spoken linguistic materials. The construction of the CINTIL-DeepGramBank was performed adopting the annotation procedure where independent annotators produce primary data and their decisions are validated in a subsequent adjudication phase by a third independent annotator. More specifically, each sentence was automatically processed by the LX-Suite (Silva, 2007) and analysed by LXGram: once a set of grammatical analyses is obtained (a parse forest), two independent annotators choose the analysis each one of them considers to be correct. In case of divergence between their decisions, a third independent adjudicator reviews their options and makes the final choice. The annotators and adjudicators are language experts with post-graduate degrees in Linguistics.
The workbench used to support this process of annotation was [incr tsdb()] (Oepen, 2001), which makes it possible to parse, select and collect fully fledged deep grammatical representations for the respective sentences. Annotation speed is roughly 80 to 100 sentences per day. At the moment, the last stable version of the CINTIL-DeepGramBank, version 3, is composed of 5422 sentences. For this version, the level of inter-annotator agreement (ITA) scores 0.86 in terms of the specific inter-annotator metric we developed for this kind of corpora and annotation (Castro, 2011). Since the CINTIL-DeepGramBank keeps being developed, we have an additional 4047 sentences in the ongoing version 4, with 0.80 of inter-annotator agreement.
Extracting semi-annotated constituency trees
Propbanks are syntactic constituency treebanks whose trees have their constituents labeled with semantic role tags (Palmer et al., 2005). Propbanks are thus annotated corpora that result from the extension of the annotation associated to the sentences in treebanks by means of an extra layer of linguistic information for semantic roles.
After the manual selection of the correct analyses (described in the previous section), the CINTIL-DeepGramBank was processed in order to obtain only the syntactic constituency trees. 1 To achieve this, the tool lkb2standard was developed to extract these trees from the files exported by [incr tsdb()]. These are trees that are then ready to be extended to form the CINTIL-PropBank, by means of their enrichment with appropriate semantic role tags. Some of the semantic role labels in the tag set used in this PropBank can be obtained directly from the deep grammatical representations and through this extraction tool. This is done by resorting to the feature structures that describe the semantics of the sentence in the CINTIL-DeepGramBank, namely those used to represent the arguments of predicators, ARG1 to ARGn. Furthermore, the extraction tool lkb2standard was designed to play a role that goes beyond the mere extraction of the constituency tree annotated with these ARG1 to ARGn labels. By resorting to the details of the deep grammatical representation, it makes it possible to label phrases with a number of further labels that account for phrases that, on the surface level, are associated with more than one argument (see Figure 1, for example; the full list of these argument labels is given further below). There are two further tools supporting the manual phase of annotation described below, aimed at specifying the semantic role of modifiers: one converts trees into an annotation format compatible with the annotation interface (see Figure 2); and a reverser tool performs the inverse operation (on transformed trees, such as the one shown in Figure 3). 2 As the outcome of the operation of the first of them, the set of sentences to be annotated can be presented in a spreadsheet file, with each sentence in a different sheet. For each suite of treebanked sentences, a spreadsheet is created with as many sheets as there are sentences in that suite. If a given sentence happens not to have received a parse, its sheet only contains its identification number and that sentence. As we can see in Figure 2, each line has cells automatically filled in, and others to be manually filled in by the annotator. Each line includes: in column (A), the syntactic category and grammatical function; in column (B), the semantic role assigned by the grammar; in (C), the cell to be filled in by the human annotator; in (D), the constituent being tagged; and in (E) the possible observations from the annotator. A completion step then follows, which consists in the manual specification of the occurrences of the portmanteau tag M (the underspecified modifier tag assigned by the grammar) in terms of one of the semantic roles available for modifiers in our tag set:
• LOC - Location: to locate an action in place, whether physical or abstract (see Figure 4)
• EXT - Extension: used with strings carrying an extension notion, mainly numerical. Includes measures, percentages, quantifiers, and comparative expressions (see Figure 4)
• CAU - Cause: to determine a cause or reason for an action (see Figure 5)
• TMP - Temporal: to locate an action in time, including frequency, duration, and repetition (see Figure 4)
• PNC - Purpose, goal: for all strings that describe a goal or purpose of a given action (see Figure 6)
• MNR - Manner: for all strings that specify the way or manner in which an action is realized or done (see Figure 3)
• DIR - Direction: to reference directions, covering both the source/origin and the destination (see Figure 4)
• POV - Point of View: for strings that express an author's position about a given event (see Figure 7)
• PRED - Secondary predication: for all cases of predicative structures, mainly past participles and resultative constructions
• ADV - Adverbial: for strings that do not fall into any of the other categories (see Figure 3)

Figure 1: CINTIL-DeepGramBank constituency tree with semantic M tags highlighted for: Decidimos trabalhar em conjunto e cooperar nas questões delicadas ("We decided to work together and cooperate on the delicate issues").
At this point, it is important to note that, in the case of attributes and relative clauses with A-M, AP-M and CP-M tags at the constituency level, the tag M (at the third level, the semantic role) is automatically replaced by PRED at this step of conversion (see and compare Figures 1 and 3 for the phrase "nas questões delicadas").
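To make this conversion step concrete, a hypothetical sketch of flattening a labeled tree into the spreadsheet rows of columns (A)-(E), including the automatic replacement of M by PRED for A-M, AP-M and CP-M nodes, is given below. The node encoding and the exact tag spellings are assumptions of this illustration, not the actual lkb2standard or conversion tools:

```python
AUTO_PRED = {"A-M", "AP-M", "CP-M"}   # attributes and relative clauses

def tree_to_rows(node, rows=None):
    """Flatten a constituency tree (nested dicts assumed) into annotation rows."""
    if rows is None:
        rows = []
    role = node.get("role")            # e.g. "ARG1", "M", or None
    if role == "M" and node["label"] in AUTO_PRED:
        role = "PRED"                  # automatic specification at conversion time
    rows.append({
        "A_cat_function": node["label"],          # category + grammatical function
        "B_role_from_grammar": role or "",
        "C_role_manual": "",                      # to be filled in by the annotator
        "D_constituent": " ".join(node["tokens"]),
        "E_observations": "",
    })
    for child in node.get("children", []):
        tree_to_rows(child, rows)
    return rows
```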
This manual phase of the construction of the PropBank is always done by two independent annotators, who choose the tags each one of them considers to be correct. In case of divergence between annotators, a third independent adjudicator reviews their decisions and makes the final choice. The annotators are experts with post-graduate degrees in Linguistics. The annotation speed is around 200 sentences per day. According to our latest data, from stable version 3 (5422 sentences), the level of inter-annotator agreement is over 0.75 in terms of the k-coefficient. For the ongoing version 4 (with an extra 4047 sentences), the level of inter-annotator agreement is 0.76.
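For reference, the k-coefficient mentioned above can be computed as Cohen's kappa over the two annotators' label decisions. The following minimal sketch is illustrative only and is not the project's own validation tooling:

```python
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators' parallel label sequences."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    expected = sum(freq_a[l] * freq_b[l] for l in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)
```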
When this manual propbanking is finalized, the sentences -now extended with the newly assigned tags for the semantic roles of modifiers -are converted back into the original tree representation. This operation is ensured by a reverting tool that takes the data in the sheets of the spreadsheet and recombines the new information added by the human annotator with the original information (grammatical category and syntactic functions) about the parse tree of the sentence. We now have a complete PropBank with the two information levels: phrase constituency and phrase-level semantic roles. As can be seen in Figure 3, all the M tags are now replaced by fully specified semantic values: ADV and MNR tags. Recall that, in this case, the PRED tag was automatically assigned at the conversion step that generated the spreadsheet. At this point, with all propbanking guidelines, criteria, and process succinctly described, we can illustrate through examples how the labels enumerated in the previous section are assigned (see Figures 4 to 7).

Figure 6: CINTIL-PropBank tree with the semantic role PNC highlighted for: Vamos ter de jogar para os pontos e para as vitórias ("We're going to have to play for points and victories").
Figure 7: CINTIL-PropBank tree with the semantic role POV highlighted for: Para mim, isso é importante ("To me, that's important").
Some applications of the PropBank
It is important to note that with the automatic PropBanking phase it was already possible to extract treebanks and dependencybanks, since the CINTIL-DeepGramBank already contains the constituency structure with syntactic information, which is enough to extract a treebank, and syntactic function tags, which can be used to build a dependencybank. With the second PropBanking step -the manual specification of semantic role tags -we now have an opportunity to get added value: a resource to train a semantic role labeller. 3 A semantic role labeller makes it possible to correctly identify the various semantic roles in a sentence, enabling the recognition of relations between their elements, such as who did what, what happened to whom, etc. With these semantic values, we have a world of new possibilities to improve or create tools and resources for areas such as question answering, information extraction, summarization, machine learning, and information retrieval on the web, which opens the possibility of semantic web search.
• ARG1 - Argument 1, e.g. O João deu uma flor à Maria. ("João gave Maria a flower.")
• ARG2 - Argument 2, e.g. O João deu uma flor à Maria. (idem)
• ARG3 - Argument 3, e.g. O João deu uma flor à Maria. (idem)
• ARG11 - Argument 1 of subordinating predicator and Argument 1 in the subordinate clause (semantic function of Subjects of so-called Subject Control predicators), e.g. As crianças não querem dormir. ("The children don't want to go to sleep.")
• ARG21 - Argument 2 of subordinating predicator and Argument 1 in the subordinate clause (semantic function of Subjects of so-called Direct Object Control predicators), e.g. Uma oferta obrigou o João a tomar medidas. ("An offer made João take action.")
• ARGncp - Argument n in complex predicate constructions, e.g. O cliente podia estar mais confiante. ("The client could have been more confident.")
• ARGnac - Argument n of anticausative readings, e.g. O doente acordou. ("The patient woke up.")

4. Manual PropBanking: completing the annotation
Building on the information made explicit by the deep linguistic grammar, the remaining phrases that are modifiers, associated with non-argumental positions, are left with the semantic role tag M, as we can see in Figure 1.
Figure 2: Spreadsheet annotation interface for specifying semantic roles of the M tags for: Decidimos trabalhar em conjunto e cooperar nas questões delicadas ("We decided to work together and cooperate on the delicate issues").
Figure 3: CINTIL-PropBank tree with manual MNR and ADV tags and automatic PRED tag highlighted for: Decidimos trabalhar em conjunto e cooperar nas questões delicadas ("We decided to work together and cooperate on the delicate issues").
Figure 4: CINTIL-PropBank tree with the semantic roles EXT, TMP, LOC, and DIR highlighted for: Só quando sentiu uma mão no ombro levantou os olhos do chão ("Only when he felt a hand on his shoulder did he raise his eyes from the floor").
Figure 5: CINTIL-PropBank tree with the semantic role CAU highlighted for: Eles falharam por várias razões conjunturais ("They failed for many conjunctural reasons").
1 For a detailed account of the linguistic options that are behind the syntactic constituency, see (Branco et al., 2011).
2 For a more detailed account of this annotation environment and process, see (Branco et al., 2009).
3 This application is currently under development and testing with the current version of the CINTIL-PropBank.
Concluding remarks
In this paper, we reported on the solutions we followed to develop a propbank with almost 10 thousand sentences. This propbank was built with the help of a deep linguistic grammar, which made it possible to construct a high-quality and reliable data set with semantic information that will support the training of semantic role labellers for Portuguese. This resource also has the potential to benefit many other natural language processing applications, such as information extraction, question answering, summarization, machine translation, and information retrieval, among others.
Baker, Colin, Michael Ellsworth, and Katrin Erk. 2007. SemEval'07 task 19: frame semantic structure extraction. In Proceedings of the 4th International Workshop on Semantic Evaluations (SemEval'07, ACL), pages 9-104, Stroudsburg, PA, USA.
Bender, Emily M., Dan Flickinger, and Stephan Oepen. 2002. The Grammar Matrix: An open-source starter-kit for the development of cross-linguistically consistent broad-coverage precision grammars. In John Carroll, Nelleke Oostdijk, and Richard Sutcliffe, editors, Proceedings of the Workshop on Grammar Engineering and Evaluation at the 19th International Conference on Computational Linguistics, pages 8-14, Taipei, Taiwan.
Böhmová, Alena, Jan Hajič, Eva Hajičová, and Barbora Hladká. 2001. The Prague Dependency Treebank: A three-level annotation scenario. In Anne Abeillé, editor, Treebanks: Building and Using Syntactically Annotated Corpora, pages 103-127. Kluwer Academic Publishers.
Branco, António and Francisco Costa. 2008a. A computational grammar for deep linguistic processing of Portuguese: LXGram, version A.4.1. Technical Report TR-2008-17, Universidade de Lisboa, Faculdade de Ciências, Departamento de Informática.
Branco, António and Francisco Costa. 2008b. LXGram in the shared task "comparing semantic representations" of STEP 2008. In Johan Bos and Rodolfo Delmonte, editors, Semantics in Text Processing. STEP 2008 Conference Proceedings, volume 1 of Research in Computational Semantics, pages 299-314. College Publications.
Branco, António and Francisco Costa. 2010. A deep linguistic processing grammar for Portuguese. In Lecture Notes in Artificial Intelligence, volume 6001, pages 86-89. Springer, Berlin.
Branco, António, Sara Silveira, Sérgio Castro, Mariana Avelãs, Clara Pinto, and Francisco Costa. 2009. Dynamic propbanking with deep linguistic grammars. In Proceedings, TLT009 - The 8th International Workshop on Treebanks and Linguistic Theories, pages 39-50, Milan.
Branco, António, Francisco Costa, João Silva, Sara Silveira, Sérgio Castro, Mariana Avelãs, Clara Pinto, and João Graça. 2010. Developing a deep linguistic databank supporting a collection of treebanks: the CINTIL DeepGramBank. In Proceedings of LREC2010 - The 7th International Conference on Language Resources and Evaluation, La Valleta, Malta.
Branco, António, João Silva, Francisco Costa, and Sérgio Castro. 2011. CINTIL TreeBank handbook: Design options for the representation of syntactic constituency. Technical Report TR-2011-02. Available at: http://docs.di.fc.ul.pt/.
Carreras, Xavier and Lluís Màrquez. 2005. Introduction to the CoNLL-2005 shared task: semantic role labeling. In Proceedings of the 9th Conference on Computational Natural Language Learning, CONLL'05, pages 152-164, Stroudsburg, PA, USA. Association for Computational Linguistics.
Castro, Sérgio. 2011. Developing reliability metrics and validation tools for datasets with deep linguistic information. Master's thesis, Universidade de Lisboa, Faculdade de Ciências, Departamento de Informática, Portugal.
Copestake, Ann, Dan Flickinger, Carl Pollard, and Ivan Sag. 2005. Minimal recursion semantics: An introduction. Research on Language & Computation, 3(4):281-332, December.
Marcus, Mitchell, Mary Marcinkiewicz, and Beatrice Santorini. 1993. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330, June.
Oepen, Stephan. 2001. [incr tsdb()] - competence and performance laboratory. User manual. Technical report, Computational Linguistics, Saarland University, Saarbrücken, Germany. In preparation.
Palmer, Martha, Daniel Gildea, and Paul Kingsbury. 2005. The proposition bank: An annotated corpus of semantic roles. Computational Linguistics, 31(1):71-106, March.
Pollard, Carl and Ivan Sag. 1994. Head-Driven Phrase Structure Grammar. Stanford: Chicago University Press and CSLI Publications.
Silva, João, António Branco, Sérgio Castro, and Ruben Reis. 2010. Out-of-the-box robust parsing of Portuguese. In Proceedings of the 9th Encontro para o Processamento Computacional da Língua Portuguesa Escrita e Falada (PROPOR), pages 75-85.
Silva, João. 2007. Shallow processing of Portuguese: From sentence chunking to nominal lemmatization. MSc thesis, University of Lisbon. Published as Technical Report DI-FCUL-TR-07-16, Portugal.
8,732,490 | Cross-lingual Projections between Languages from Different Families | Cross-lingual projection methods can benefit from resource-rich languages to improve the performance of NLP tasks in resource-scarce languages. However, these methods confront the difficulty of syntactic differences between languages, especially when the pair of languages varies greatly. To make the projection method generalize well to diverse language pairs, we enhance the projection method based on word alignments by introducing target-language word representations as features and proposing a novel noise removing method based on these word representations. Experiments showed that our methods improve the performances greatly on projections between English and Chinese. | [
10986188,
2037646,
15279538,
6698104,
1324511
] | Cross-lingual Projections between Languages from Different Families
August 4-9
Mo Yu yumo@mtlab.hit.edu.cn
Tiejun Zhao tjzhao@mtlab.hit.edu.cn
Yalong Bai ylbai@mtlab.hit.edu.cn
School of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
Hao Tian tianhao@baidu.com
Dianhai Yu yudianhai@baidu.com
Baidu Inc., Beijing, China
Cross-lingual Projections between Languages from Different Families
Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics
Sofia, Bulgaria, August 4-9
Cross-lingual projection methods can benefit from resource-rich languages to improve the performance of NLP tasks in resource-scarce languages. However, these methods confront the difficulty of syntactic differences between languages, especially when the pair of languages varies greatly. To make the projection method generalize well to diverse language pairs, we enhance the projection method based on word alignments by introducing target-language word representations as features and proposing a novel noise removing method based on these word representations. Experiments showed that our methods improve the performances greatly on projections between English and Chinese.
Introduction
Most NLP studies have focused on a limited set of languages with large amounts of annotated data. English and Chinese are examples of these resource-rich languages. Unfortunately, it is impossible to build sufficient labeled data for all tasks in all languages. To address NLP tasks in resource-scarce languages, cross-lingual projection methods were proposed, which make use of existing resources in a resource-rich language (also called the source language) to help NLP tasks in a resource-scarce language (also called the target language).
There are several types of projection methods. One intuitive and effective method is to build a common feature space for all languages, so that a model trained on one language can be directly used on other languages (Täckström et al., 2012). We call this direct projection, and it has become very popular recently. The main limitation of these methods is that the target language has to be similar to the source language. Otherwise the performance will degrade, especially when the orders of phrases between the source and target languages differ a lot.
Another common type of projection method maps labels from resource-rich language sentences to resource-scarce ones in a parallel corpus using word alignment information (Yarowsky et al., 2001; Hwa et al., 2005). We refer to these as projection based on word alignments in this paper. Compared to other types of projection methods, this type of method is more robust to syntactic differences between languages, since it trains models on the target side and thus follows the topology of the target language.
This paper aims to build an accurate projection method with strong generality to various pairs of languages, even when the languages are from different families and are typologically divergent. As far as we know, only a few works have focused on this topic (Xia and Lewis, 2007; Täckström et al., 2013). We adopted the projection method based on word alignments since it is less affected by language differences. However, such methods also have some disadvantages. Firstly, the models trained on projected data can only cover words and cases that appear on the target side of the parallel corpus, making it difficult to generalize to test data in broader domains. Secondly, the performances of these methods are limited by the accuracy of word alignments, especially when words between the two languages are not one-to-one aligned, so the obtained labeled data contains a lot of noise, making the models built on it less accurate. We therefore built our method on top of projection based on word alignments, because of its advantage of being less affected by syntactic differences, and propose two solutions to the above two difficulties of this type of method.
Firstly, we introduce Brown clusters of the target language to make the projection models cover broader cases. Brown clustering is a kind of word representation which assigns words with similar functions to the same cluster. These clusters can be learned efficiently on large-scale unlabeled data in the target language, which is much easier to acquire even when the scale of parallel corpora for minor languages is limited. Brown clusters were first introduced to the field of cross-lingual projection in (Täckström et al., 2012) and achieved great improvements on projection between European languages. However, their work was based on direct projection methods, so it does not work very well between languages from different families, as will be shown in Section 3.
Secondly, to reduce the noises in projection, we propose a noise removing method to detect and correct noisy projected labels. The method was also built on Brown clusters, based on the assumption that instances with similar representations of Brown clusters tend to have similar labels. As far as we know, no one has done any research on removing noises based on the space of word representations in the field of NLP.
Using the above techniques, we achieved a projection method that adapts well to different language pairs even when the two languages differ enormously. Experiments on NER and POS tagging projection from English to Chinese proved the effectiveness of our methods.
In the rest of our paper, Section 2 describes the proposed cross-lingual projection method. Evaluations are in Section 3. Section 4 gives concluding remarks.
Proposed Cross-lingual Projection Methods
In this section, we first briefly introduce the cross-lingual projection method based on word alignments. Then we describe how the word representations (Brown clusters) were used in the projection method. Section 2.3 describes the noise removing methods.
Projection based on word alignments
In this paper we consider cross-lingual projection based on word alignments, because we want to build projection methods that can be used between language pairs with large differences. Figure 1 shows the procedure of cross-lingual projection methods, taking projection of NER from English to Chinese as an example. Here English is the resource-rich language and Chinese is the target language. First, sentences from the source side of the parallel corpus are labeled by an accurate model in English (e.g., "Rongji Zhu" and "Gan Luo" were labeled as "PER"), since the source language has rich resources to build accurate NER models. Then word alignments are generated from the parallel corpus and serve as a bridge, so that unlabeled words in the target language get the same labels as the words aligned to them in the source language, e.g. the first word '朱(金容)基' in Chinese gets the projected label 'PER', since it is aligned to "Rongji" and "Zhu". In this way, labels in source language sentences are projected to the target sentences. From the projection procedure we can see that a labeled dataset of the target language is built based on the labels projected from the source sentences. The projected dataset has a large size, but with a lot of noise. With this labeled dataset, models for the target language can be trained in a supervised way. Then these models can be used to label sentences in the target language. Since the models are trained on the target language, this projection approach is less affected by language differences, compared with direct projection methods.
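A minimal sketch of this label projection step is given below; the data structures (a list of source-side labels and a list of alignment index pairs) are assumptions for illustration and do not reflect the actual pipeline code:

```python
def project_labels(source_labels, alignment, target_len, default="O"):
    """
    source_labels: labels for source-side tokens, e.g. ["PER", "PER", "O", ...]
    alignment:     (source_index, target_index) pairs, e.g. from GIZA++
    target_len:    number of target-side tokens
    """
    projected = [default] * target_len
    for s, t in alignment:
        if source_labels[s] != default:
            projected[t] = source_labels[s]   # copy the label across the link
    return projected
```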
Word Representation Features for Cross-lingual Projection
One disadvantage of the above method is that the coverage of the projected labeled data used for training target language models is limited by the coverage of the parallel corpus. For example in Figure 1, some Chinese politicians from the 1990's will be learned as person names, but some names of recent politicians, such as "Obama", which did not appear in the parallel corpus, would not be recognized.
Words: w_i, i ∈ {−2:2}; w_{i−1}/w_i, i ∈ {0,1}
Cluster: c_i, i ∈ {−2:2}; c_{i−1}/c_i, i ∈ {−1,2}; c_{−1}/c_1
Transition: y_{−1}/y_0/{w_0, c_0, c_{−1}/c_1}
To broaden the coverage of the projected data, we introduced word representations as features. The same or similar word representations will be assigned to words appearing in similar contexts, such as person names. Since word representations are trained on large-scale unlabeled sentences in the target language, they cover many more words than the parallel corpus does. So the information about a word in the projected labeled data will apply to other words with the same or similar representations, even if they did not appear in the parallel data.
In this work we use Brown clusters as word representations for the target language. Brown clustering assigns words to hierarchical clusters according to the distributions of the words before and after them. Taking NER as an example, the feature template may contain the features shown in Table 1. The cluster id of the word to predict (c_0) and those of the context words (c_i, i ∈ {−2, −1, 1, 2}), as well as the conjunctions of these clusters, were used as features in CRF models in the same way the traditional word features were used. Since Brown clusters are hierarchical, the cluster for each word can be represented as a binary string. So we also use prefixes of cluster IDs as features, in order to compensate for clusters containing a small number of words. For languages lacking morphological changes, such as Chinese, there are no prefix/suffix or orthography features. However, the cluster features are always available for any language.
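A sketch of how such cluster features could be extracted, in the spirit of Table 1, is shown below; the prefix lengths and the feature-string format are assumptions of this illustration, not the exact template used in the experiments:

```python
def cluster_features(words, i, clusters, prefixes=(4, 6, 10)):
    """Emit Brown-cluster features for position i; `clusters` maps word -> bit string."""
    feats = []
    for offset in (-2, -1, 0, 1, 2):
        j = i + offset
        if 0 <= j < len(words):
            c = clusters.get(words[j], "UNK")
            feats.append(f"c[{offset}]={c}")
            # prefixes of the hierarchical cluster id act as coarser back-off clusters
            feats.extend(f"c{p}[{offset}]={c[:p]}" for p in prefixes)
    if i - 1 >= 0 and i + 1 < len(words):
        # one of the conjunction features from the template
        feats.append("c[-1]/c[+1]="
                     f"{clusters.get(words[i-1], 'UNK')}/{clusters.get(words[i+1], 'UNK')}")
    return feats
```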
Noise Removing in Word Representation Space
Another disadvantage of the projection method is that the accuracy of the projected labels is badly affected by non-literal translation and word alignment errors, making the data contain a lot of noise. For example in Figure 1, the word "吴仪(Wu Yi)" was not labeled as a named entity since it was not aligned to any words in English due to alignment errors. A more accurate model can be trained if such noises are reduced.
A direct way to remove the noises is to modify the label of a word to make it consistent with the majority of labels assigned to the same word in the parallel corpus. The method is limited when a word with low frequency has many of its appearances incorrectly labeled because of alignment errors. In this situation the noises are impossible to remove according to the word itself. The error in Figure 1 is an example of this case since the other few occurrences of the word "吴仪(Wu Yi)" also happened to fail to get the correct label.
Such difficulties can be easily solved when we turn to the space of Brown clusters, based on the observation that words in the same cluster tend to have the same labels. For example in Figure 1, the words "吴仪(Wu Yi)", "朱(金容)基(Zhu Rongji)" and "罗干(Luo Gan)" are in the same cluster, because they are all names of Chinese politicians and usually appear in similar contexts. Having observed that a large portion of the words in this cluster are person names, it is reasonable to modify the label of "吴仪(Wu Yi)" to "PER".
The space of clusters is also less sparse, so it is also possible to use combinations of clusters to help noise removal, in order to exploit the context information of data instances. For example, we can represent an instance as the bigram of the cluster of the target word and that of the previous word. It is then reasonable that its label should be the same as that of other instances with the same cluster bigram.
The whole noise removing method can be represented as follows: suppose a target word w_i was assigned label y_i during projection with alignment probability p_i. From the whole projected labeled data, we can get the distribution p_w(y) for the word w_i, the distribution p_c(y) for its cluster c_i, and the distribution p_b(y) for the bigram c_{i−1}c_i. We then replace y_i with the label y' that satisfies

y' = argmax_y ( δ(y, y_i) · p_i + Σ_{x ∈ {w,c,b}} p_x(y) )    (1)

where δ(y, y_i) is an indicator function, which is 1 when y equals y_i. In practice, we set p_{w/c/b}(y) to 0 for the y's whose probability is less than 0.5. With the noise removing method, we can build a more accurate labeled dataset based on the projected data and then use it for training models.
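A direct, illustrative implementation of the relabelling rule in Eq. (1) could look as follows, assuming the three distributions have already been estimated from the projected data as dictionaries mapping labels to probabilities (this is a sketch, not the authors' code):

```python
def relabel(y_i, p_i, p_w, p_c, p_b, label_set):
    """Pick the label maximizing Eq. (1) for one projected instance."""
    def thresholded(dist, y):
        p = dist.get(y, 0.0)
        return p if p >= 0.5 else 0.0            # drop weak distributions, as in the text
    best, best_score = y_i, float("-inf")
    for y in label_set:
        score = p_i if y == y_i else 0.0         # delta term keeps the aligned label in play
        score += sum(thresholded(d, y) for d in (p_w, p_c, p_b))
        if score > best_score:
            best, best_score = y, score
    return best
```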
Experimental Results
Data Preparation
We took English as the resource-rich language and used Chinese to imitate a resource-scarce language, since the two languages differ a lot. We conducted experiments on projections of NER and POS tagging. The resource-scarce language was assumed to have no training data. For the NER experiments, we used data from People's Daily (April 1998) as test data (55,177 sentences). The data was converted following the style of the Penn Chinese Treebank (CTB) (Xue et al., 2005). For the evaluation of POS tagging projection, we used the test set of CTB. Since English and Chinese have different annotation standards, labels in the two languages were converted to the universal POS tag set so that the labels between the source and target languages were consistent. The universal tag set made the task of POS tagging easier, since fine-grained types no longer matter.
The Brown clusters were trained on Chinese Wikipedia. The bodies of all articles are retained to induce 1000 clusters using the algorithm in (Liang, 2005) . Stanford word segmentor (Tseng et al., 2005) was used for Chinese word segmentation. When English Brown clusters were in need, we trained the word clusters on the tokenized English Wikipedia.
We chose LDC2003E14 as the parallel corpus, which contains about 200,000 sentences. GIZA++ (Och and Ney, 2000) was used to generate word alignments. It is easier to obtain a similar amount of parallel sentences between English and minor languages, making the conclusions more general for projection problems in real applications.
Performances of NER Projection
Table 2 shows the performances of NER projection. We re-implemented the direct projection method with projected clusters of (Täckström et al., 2012). Although their method was proven to work well on European language pairs, the results showed that projection based on word alignments (WA) worked much better, since the source and target languages are from different families.
After we added the clusters trained on Chinese Wikipedia as features, as described in Section 2.2, a great improvement of about 9 points on the average F1-score of the three entity types was achieved, showing that the word representation features help to recall more named entities in the test set. The performances of all three categories of named entities were improved greatly after adding word representation features. Larger improvements were observed on person names (14.4%). One of the reasons for the improvements is that, in Chinese, person names are usually single words, so the Brown clustering method can learn good word representations for those entities. Since in the test set most entities that are not covered are person names, Brown clusters helped to increase the recall greatly.
In (Täckström et al., 2012), Brown clusters trained on the source side were projected to the target side based on word alignments. Rather than building the same feature space for both the source language and the target language as in (Täckström et al., 2012), we tried to use the projected clusters as features in projection based on word alignments. In this way the two methods used exactly the same resources. In the experiments, we tried to project clusters trained on English Wikipedia to Chinese words. They improved the performance by about 6.1%, and the result was about 20% higher than that achieved by the direct projection method, showing that even when using exactly the same resources, the proposed method greatly outperformed that of (Täckström et al., 2012) on diverse language pairs. Next we studied the effects of the noise removing methods. Firstly, we removed noises according to Eq. (1), which yielded another large improvement of about 6% over the best results based on cluster features. Moreover, we conducted experiments to see the effects of each of the three factors. The results show that the noise removing methods based on words and on clusters both achieved improvements of between 1.5 and 2 points. The method based on bigram features got the largest improvement of 3.5 points. It achieved a great improvement on person names. This is because a great proportion of the vocabulary is made up of person names, some of which are mixed in clusters with common nouns.
While the noise removing method based on clusters failed to recognize them as named entities, cluster bigrams can make use of context information to help discriminate these mixed clusters.
Performances of POS Projection
In this section we test our method on the projection of POS tagging from English to Chinese, to show that our methods extend well to other NLP tasks. Unlike named entities, POS tags are associated with single words. When one target word is aligned to more than one word with different POS tags on the source side, it is hard to decide which POS tag to choose. So we only retained the data labeled by 1-to-1 alignments, which also contains less noise, as pointed out by (Hu et al., 2011). The same feature template as in the NER experiments was used for training POS taggers.
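Keeping only 1-to-1 links can be done with a simple filter over the alignment pairs; the sketch below is illustrative and assumes alignments are given as (source index, target index) pairs:

```python
from collections import Counter

def one_to_one_links(alignment):
    """Retain only alignment links whose source and target tokens each occur once."""
    src_counts = Counter(s for s, _ in alignment)
    tgt_counts = Counter(t for _, t in alignment)
    return [(s, t) for s, t in alignment
            if src_counts[s] == 1 and tgt_counts[t] == 1]
```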
The results are listed in Table 4. Because of the great differences between English and Chinese, projection based on word alignments worked better than direct projection did. After adding word cluster features and removing noises, an error reduction of 12.7% was achieved.
POS tagging projection can benefit more from our noise removing methods than NER projection could, i.e., noise removing gave rise to a larger improvement (2.7%) than that achieved by adding cluster features to the baseline system (1.5%). One possible reason is that our noise removing methods assume that labels are associated with single words, which is more suitable for POS tagging.
Methods                          Accuracy
Direct projection (Täckström)    62.71
Projection based on WA           66.68
+clusters (ch wiki)              68.23
+clusters (ch) & noise removing  70.92
Conclusion and perspectives
In this paper we introduced Brown clusters of target languages into cross-lingual projection and proposed methods for removing noise in the projected labels. Experiments showed that both techniques could greatly improve the performances and could help the projection method generalize well to languages that differ a lot. Note that although projection methods based on word alignments are less affected by syntactic differences, the topological differences between languages still remain an important reason for the limited performance of cross-lingual projection. In the future we will try to make use of representations of sub-structures to deal with syntactic differences in more complex tasks such as the projection of dependency parsing. Future improvements also include combining direct projection methods based on joint feature representations with the proposed method, as well as making use of projected data from multiple languages.
Figure 1: An example of projection of NER. Labels of the Chinese sentence (right) in brackets are projected from the source sentence.
Table 1: NER features. c_i is the cluster id of w_i.
Table 2: Performances of NER projection.
Table 3: Performances of noise removing methods.
System        PER    LOC    ORG    AVG
By Eq. (1)    59.77  55.56  72.26  62.53
By clusters   49.75  53.10  72.46  58.44
By words      49.00  54.69  70.59  58.09
By bigrams    58.39  55.01  66.88  60.09
Table 4: Performances of POS tagging projection.
Acknowledgments
We would like to thank the anonymous reviewers for their valuable comments and helpful suggestions. This work was supported by National Natural Science Foundation of China (61173073), and the Key Project of the National High Technology Research and Development Program of China (2011AA01A207).
Brown, P.F., P.V. Desouza, R.L. Mercer, V.J.D. Pietra, and J.C. Lai. 1992. Class-based n-gram models of natural language. Computational Linguistics, 18(4):467-479.
Das, D. and S. Petrov. 2011. Unsupervised part-of-speech tagging with bilingual graph-based projections. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 600-609.
Hu, P.L., M. Yu, J. Li, C.H. Zhu, and T.J. Zhao. 2011. Semi-supervised learning framework for cross-lingual projection. In Web Intelligence and Intelligent Agent Technology (WI-IAT), 2011 IEEE/WIC/ACM International Conference on, volume 3, pages 213-216. IEEE.
Hwa, R., P. Resnik, A. Weinberg, C. Cabezas, and O. Kolak. 2005. Bootstrapping parsers via syntactic projection across parallel texts. Natural Language Engineering, 11(3):311-326.
Jiang, W. and Q. Liu. 2010. Dependency parsing and projection based on word-pair classification. In Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics, ACL, volume 10, pages 12-20.
Liang, P. 2005. Semi-supervised learning for natural language. Ph.D. thesis, Massachusetts Institute of Technology.
McDonald, R., S. Petrov, and K. Hall. 2011. Multi-source transfer of delexicalized dependency parsers. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 62-72. Association for Computational Linguistics.
Och, F.J. and H. Ney. 2000. Giza++: Training of statistical translation models.
Petrov, S., D. Das, and R. McDonald. 2011. A universal part-of-speech tagset. arXiv preprint arXiv:1104.2086.
Täckström, O., R. McDonald, and J. Uszkoreit. 2012. Cross-lingual word clusters for direct transfer of linguistic structure.
Täckström, O., R. McDonald, and J. Nivre. 2013. Target language adaptation of discriminative transfer parsers. In Proceedings of NAACL-HLT.
Tseng, H., P. Chang, G. Andrew, D. Jurafsky, and C. Manning. 2005. A conditional random field word segmenter for Sighan bakeoff 2005. In Proceedings of the Fourth SIGHAN Workshop on Chinese Language Processing, volume 171. Jeju Island, Korea.
Xia, F. and W. Lewis. 2007. Multilingual structural projection across interlinear text. In Proceedings of the Conference on Human Language Technologies (HLT/NAACL 2007), pages 452-459.
Xue, N., F. Xia, F.D. Chiou, and M. Palmer. 2005. The Penn Chinese Treebank: Phrase structure annotation of a large corpus. Natural Language Engineering, 11(2):207.
Yarowsky, D., G. Ngai, and R. Wicentowski. 2001. Inducing multilingual text analysis tools via robust projection across aligned corpora. In Proceedings of the First International Conference on Human Language Technology Research, pages 1-8. Association for Computational Linguistics.
15,770,107 | The Simple Truth about Dependency and Phrase Structure Representations An Opinion Piece | There are many misconceptions about dependency representations and phrase structure representations for syntax. They are partly due to terminological confusion, partly due to a lack of meta-scientific clarity about the roles of representations and linguistic theories. This opinion piece argues for a simple but clear view of syntactic representation. | [
7140689,
5151364,
6434733
] | The Simple Truth about Dependency and Phrase Structure Representations An Opinion Piece
Association for Computational Linguistics, June 2010
Owen Rambow rambow@ccls.columbia.edu
CCLS, Columbia University, New York, NY, USA
The Simple Truth about Dependency and Phrase Structure Representations An Opinion Piece
Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the ACL
Los Angeles, California, June 2010. © Association for Computational Linguistics
There are many misconceptions about dependency representations and phrase structure representations for syntax. They are partly due to terminological confusion, partly due to a lack of meta-scientific clarity about the roles of representations and linguistic theories. This opinion piece argues for a simple but clear view of syntactic representation.
Introduction
To the machine learning community, treebanks are just collections of data, like pixels with captions, structural and behavioral facts about genes, or observations about wild boar populations. In contrast, to us computational linguists, treebanks are not naturally occurring data at all: they are the result of a very complex annotation process. While the text that is annotated (usually) is naturally occurring, the annotation itself is already the result of a scientific activity. This opinion piece argues that the level of discourse about treebanks often found in our community does not reflect this fact (presumably due to the influence of the brute machine learning perspective). We, as a community of computational linguists, need to be very precise when talking about treebanks and syntactic representations in general.
So let's start with three very important concepts which we must always distinguish. The representation type: what type of mathematical object is used to represent syntactic facts? In this opinion piece, I only consider dependency trees (DTs) and phrase structure trees (PSTs) (Section 2). The represented syntactic content: the morphological and syntactic facts of the analyzed sentence (Section 3). The syntactic theory: it explains how syntactic content is represented in the chosen representation type (Section 4).
A crucial confusing factor is the fact that the terms dependency and phrase structure each have both a mathematical and a linguistic meaning. The mathematical meaning refers to representation types. The linguistic meaning refers to syntactic content. I discuss this issue in Section 3. I discuss the issue of converting between DTs and PSTs in Section 5, as an example of how my proposed conceptualization of syntactic representation throws light on a computational problem.
This opinion piece will be a success if after reading it, the reader concludes that actually he or she knew this all along. In fact, this opinion piece does not advocate for a controversial position; its mission is to make its readers be more precise when talking about syntactic representations. This opinion piece is intentionally polemical for rhetorical reasons.
DTs and PSTs as Representation Types
Assume we have two disjoint symbol sets: a set of terminal symbols which contains the words of the language we are describing; and a set of nonterminal symbols. A Dependency Tree (DT) is a tree in which all nodes are labeled with words (elements of the set of terminal symbols) or empty strings. A Phrase Structure Tree (PST) is a tree in which all and only the leaf nodes are labeled with words or empty strings, and internal nodes are labeled with nonterminal symbols. There is nothing more to the definitions. Trees of both types can have many other properties which are not part of the two definitions, and which do not follow from the definitions. I mention some such properties.
Unordered trees. DTs and PSTs can be ordered or unordered. For example, the Prague Theory (Sgall et al., 1986) uses unordered DTs at the deeper level of representation and ordered DTs at a more surfacy level. GPSG (Gazdar et al., 1985) uses unordered trees (or at any rate context-free rules whose righthand side is ordered by a separate component of the grammar), as does current Chomskyan theory (the PST at spell-out may be unordered).
Empty categories. Empty categories can be empty pronouns, or traces, which are co-indexed with a word elsewhere in the tree. Empty pronouns are widely used in both DT-and PST-based representations. While most DT-based approaches do not use traces, Lombardo and Lesmo (1998) do; and while traces are commonly found in PST-based approaches, there are many that do not use them, such as the c-structure of LFG.
Discontinuous Constituents or Non-Projectivity. Both types of trees can be used with or without discontinuous constituents; PSTs are more likely to use traces to avoid discontinuous constituents, but linguistic proposals for PSTs with discontinuous constituents have been made (work by McCawley, or (Becker et al., 1991)).
Labeled Arcs. In DTs, arcs often have labels; arcs in PSTs usually do not, but we can of course label PST arcs as well, as is done in the German TIGER corpus. I note that in both DTs and PSTs we can represent the arc label as a feature on the daughter node, or as a separate node.
Syntactic Content
While there is lots of disagreement about the proper representation type for syntax, there is actually a broad consensus among theoretical and descriptive syntacticians of all persuasions about the range of syntactic phenomena that exist. What exactly is this content, then? It is not a theory-neutral representation of syntax (Section 4). Rather, it is the empirical matter which linguistic theory attempts to represent or explain. We cannot represent it without a theory, but we can refer to it without a theory, using names such as control constructions or transitive verb. In the same manner, we use the word light and physicists will agree on what the phenomenon is, but we cannot represent light within a theory without choosing a representation as either particles or wave.
Note that in linguistics, the terms dependency and phrase structure refer to syntactic content, i.e., syntactic facts we can represent. Syntactic dependency is a direct relation between words. Usually, this relation is labeled (or typed), and is identical to (or subsumes) the notion of grammatical function, which covers relations such as SUBJECT, OBJECT, TEMPORAL-ADJUNCT and so forth. Syntactic phrase structure, also known as syntactic constituency structure, is a recursive representation using sets of one or more linguistic units (words and empty strings), such that at each level, each set (constituent) acts as a unit syntactically. Linguistic phrase structure is most conveniently expressed in a phrase structure tree, while linguistic dependency is most conveniently expressed in a dependency tree. However, we can express the same content in either type of tree! For example, the English Penn Treebank (PTB) encodes the predicate-argument structure of English using structural conventions and special nonterminal labels ("dashtags"), such as NP-SBJ. And a dependency tree represents constituency: each node can be interpreted both as a preterminal node (X0) and as a node heading a constituent containing all terminals included in the subtree it heads (the XP). Of course, what is more complex to encode in a DT are intermediate projections, such as VP. I leave a fuller discussion aside for lack of space, but I claim that the syntactic content which is expressed in intermediate projections can also be expressed in a DT, through the use of features and arc labels.
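To make concrete the claim that every dependency-tree node can be read as heading a constituent, the following sketch (not from the paper; the tree encoding and example are assumed for illustration) computes the yield of each node's subtree, i.e. the "XP" that the node heads.

```python
# Hypothetical sketch: each dependency-tree node heads the constituent formed
# by the yield of its subtree.
from collections import defaultdict

def subtree_yields(words, heads):
    """words: list of tokens; heads: 1-based head index per token (0 = root)."""
    children = defaultdict(list)
    for i, h in enumerate(heads, start=1):
        children[h].append(i)

    def collect(node):
        nodes = [node]
        for child in children[node]:
            nodes.extend(collect(child))
        return sorted(nodes)

    # Each node i "is" both the preterminal X0 and the XP spanning its yield.
    return {words[i - 1]: [words[j - 1] for j in collect(i)]
            for i in range(1, len(words) + 1)}

# "the dog sleeps": 'sleeps' heads the clause, 'dog' heads the NP 'the dog'.
print(subtree_yields(["the", "dog", "sleeps"], [2, 3, 0]))
# {'the': ['the'], 'dog': ['the', 'dog'], 'sleeps': ['the', 'dog', 'sleeps']}
```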
Syntactic Theory
The choice of representation type does not determine the representation for a given sentence. This is obvious, but it needs to be repeated; I have heard "What is the DT for this sentence?" one too many times. There are many possible DTs and PSTs, proposed by serious syntacticians, for even simple sen-tences, even when the syntacticians agree on what the syntactic content (a transitive verb with SVO order, for example) of the analysis should be! What is going on?
In order to make sense of this, we need a third player in addition to the representation type and the content. This is the syntactic theory. A linguistic theory chooses a representation type and then defines a coherent mapping for a well-defined set of content to the chosen representation type. Here, "coherent representation" means that the different choices made for conceptually independent content are also representationally independent, so that we can compose representational choices. Note that a theory can decide to omit some content; for example, we can have a theory which does not distinguish raising from control (the English PTB does not).
There are different types of syntactic theories. A descriptive theory is an account of the syntax of one language. Examples of descriptive grammars include works such as Quirk for English, or the annotation manuals of monolingual treebanks, such as (Marcus et al., 1994; Maamouri et al., 2003). The annotation manual serves two purposes: it tells the annotators how to represent a syntactic phenomenon, and it tells the users of the treebank (us!) how to interpret the annotation. A treebank without a manual is meaningless. And an arborescent structure does not mean the same thing in all treebanks (for example, a "flat NP" indicates an unannotated constituent in the English PTB but a fully annotated construction in the Arabic Treebank).
An explanatory theory is a theory which attempts to account for the syntax of all languages, for example by reducing their diversity to a set of principles and finite-valued parameters. Linguistic theories (and explanatory theories in particular) often take the form of a one-to-many mapping from a simple representation of syntactic dependency (predicateargument structure) to a structural representation that determines surface word order. The linguistic theory itself is formulated as a (computational) device that relates the deeper level to the more surfacy level. LFG has a very pure expression of this approach, with the deeper level expressed using a DT (actually, dependency directed acyclic graphs, but the distinction is not relevant here), and the surfacy level expressed using a PST. But the Chomskyan approaches fit the same paradigm, as do many other theories of syntax.
Therefore, there is no theory-neutral representation of a sentence or a set of sentences, because every representation needs a theory for us to extract its meaning! Often what is meant by "theory-neutral tree" is a tree which is interpreted using some notion of consensus theory, perhaps a stripped-down representation which omits much content for which there is no consensus on how to represent it.
Converting Between DTs and PSTs
Converting a set of DS annotations to PS or vice versa means that we want to obtain a representation which expresses exactly the same content. This is frequently done these days as interest in dependency parsing grows but many languages only have PS treebanks. However, this process is often not understood.
To start, I observe that uninterpreted structures (i.e., structures without a syntactic theory, or trees from a treebank without a manual) cannot be converted from or into, as we do not know what they mean and we cannot know if we are preserving the same content or not. Now, my central claim about the possibility of automatically converting between PSTs and DTs is the following. If we have an interpretation for the source representation and the goal representation (as we must in order for this task to be meaningful), then we can convert any facts that are represented in the source structure, and we cannot convert any facts that are not represented in the source structure. It is that simple. If we are converting from a source which contains less information than the target, then we cannot succeed. For example, if we are converting from a PS treebank that does not distinguish particles from prepositions to a DS treebank that does, then we will fail. General claims about the possibility of conversion ("it is easier to convert PS to DS than DS to PS") are therefore meaningless. It only matters what is represented, not how it is represented.
There is, however, no guarantee that there is a simple algorithm for conversion, such as a parametrized head percolation algorithm passed down from researcher to researcher like a sorcerer's incantation. In general, if the two representations are independently devised and both are linguistically motivated, then we have no reason to believe that the conversion can be done using a specific simple approach, or using conversion rules which have some fixed property (say, the depth of the trees in the rules templates). In the general case, the only way to write an automatic converter between two representations is to study the two annotation manuals and to create a case-by-case converter, covering all linguistic phenomena represented in the target representation.
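As an illustration of what such a parametrized head-percolation converter looks like, here is a hedged sketch; the head table, the tree encoding, and the example are invented for illustration and merely stand in for the case-by-case rules a real converter would need.

```python
# Hypothetical sketch of head-percolation PS-to-DS conversion.
HEAD_TABLE = {"S": ["VP", "NP"], "VP": ["V"], "NP": ["N"]}  # assumed, illustrative only

def to_dependencies(tree, dependencies=None):
    """tree: (label, [children]) for internal nodes, a plain string for words.
    Returns the lexical head of `tree` and accumulates (head, dependent) pairs."""
    if dependencies is None:
        dependencies = []
    if isinstance(tree, str):                     # a leaf is its own head
        return tree, dependencies
    label, children = tree
    child_heads = [to_dependencies(c, dependencies)[0] for c in children]
    head = child_heads[0]                         # default: leftmost daughter
    for cat in HEAD_TABLE.get(label, []):
        for child, child_head in zip(children, child_heads):
            if not isinstance(child, str) and child[0] == cat:
                head = child_head
                break
        else:
            continue
        break
    for child_head in child_heads:                # non-heads depend on the head
        if child_head != head:
            dependencies.append((head, child_head))
    return head, dependencies

tree = ("S", [("NP", [("D", ["the"]), ("N", ["dog"])]), ("VP", [("V", ["sleeps"])])])
print(to_dependencies(tree)[1])   # [('dog', 'the'), ('sleeps', 'dog')]
```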
Machine learning-based conversion (for example, (Xia and Palmer, 2001)) is an interesting exercise, but it does not give us any general insights into dependency or phrase structure. Suppose the source contains all the information that the target should contain. Then if machine learning-based conversion fails or does not perform completely correctly, the exercise merely shows that the machine learning is not adequate. Now suppose that the source does not contain all the information that the target should contain. Then no fancy machine learning can ever provide a completely correct conversion. Also, note that unlike, for example, parsers which are based on machine learning and which learn about a natural phenomenon (language use), machine learning of conversion merely learns an artificial phenomenon: the relation between the two syntactic theories in question, which are created by researchers. (Of course, in practice, machine learning of automatic conversion between DT to PSTs is useful.)
Conclusion
I have argued that when talking about dependency and phrase structure representations, one should always distinguish the type of representation (dependency or phrase structure) from the content of the representation, and one needs to understand (and make explicit if it is implicit) the linguistic theory that relates content to representation. Machine learning researchers have the luxury of treating syntactic representations as mere fodder for their mills; we as computational linguists do not, since this is our area of expertise.
Acknowledgments
I would like to thank my colleagues on the Hindi-Urdu treebank project (Bhatt et al., 2009) (NSF grant CNS-0751089) for spirited discussions about the issues discussed here. I would like to thank Sylvain Kahane, Yoav Goldberg, and Joakim Nivre for comments that have helped me improve this paper. The expressed opinions have been influenced by far too many people to thank individually here.
Tilman Becker, Aravind Joshi, and Owen Rambow. 1991. Long distance scrambling and tree adjoining grammars. In Fifth Conference of the European Chapter of the Association for Computational Linguistics (EACL'91), pages 21-26. ACL.
Rajesh Bhatt, Bhuvana Narasimhan, Martha Palmer, Owen Rambow, Dipti Sharma, and Fei Xia. 2009. A multi-representational and multi-layered treebank for Hindi/Urdu. In Proceedings of the Third Linguistic Annotation Workshop, pages 186-189, Suntec, Singapore.
Gerald Gazdar, Ewan Klein, Geoffrey Pullum, and Ivan Sag. 1985. Generalized Phrase Structure Grammar. Harvard University Press, Cambridge, Mass.
Vincenzo Lombardo and Leonardo Lesmo. 1998. Formal aspects and parsing issue of dependency theory. In 36th Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics (COLING-ACL'98), pages 787-793, Montréal, Canada.
Mohamed Maamouri, Ann Bies, Hubert Jin, and Tim Buckwalter. 2003. Arabic Treebank: Part 1 v 2.0. Distributed by the Linguistic Data Consortium. LDC Catalog No.: LDC2003T06.
Mohamed Maamouri, Ann Bies, and Tim Buckwalter. 2004. The Penn Arabic Treebank: Building a large-scale annotated Arabic corpus. In NEMLAR Conference on Arabic Language Resources and Tools, Cairo, Egypt.
M. Marcus, G. Kim, M. Marcinkiewicz, R. MacIntyre, A. Bies, M. Ferguson, K. Katz, and B. Schasberger. 1994. The Penn Treebank: Annotating predicate argument structure. In Proceedings of the ARPA Human Language Technology Workshop.
Igor A. Mel'čuk. 1988. Dependency Syntax: Theory and Practice. State University of New York Press, New York.
P. Sgall, E. Hajičová, and J. Panevová. 1986. The meaning of the sentence and its semantic and pragmatic aspects. Reidel, Dordrecht.
Fei Xia and Martha Palmer. 2001. Converting dependency structure to phrase structures. In HLT 2001, pages 61-65. |
909,576 | A Hybrid Approach to Chinese Word Segmentation around CRFs | In this paper, we present a Chinese word segmentation system which is consisted of four components, i.e. basic segmentation, named entity recognition, error-driven learner and new word detector. The basic segmentation and named entity recognition, implemented based on conditional random fields, are used to generate initial segmentation results. The other two components are used to refine the results. Our system participated in the tests on open and closed tracks of Beijing University (PKU) and Microsoft Research (MSR). The actual evaluation results show that our system performs very well in MSR open track, MSR closed track and PKU open track. | [
10649571,
2673781
] | A Hybrid Approach to Chinese Word Segmentation around CRFs
Jun-Sheng Zhou zhoujs@nlp.nju.edu.cn
Department of Computer Science and Technology
Nanjing University
210093NanjingCHINA
Xin-Yu Dai
Department of Computer Science and Technology
Nanjing University
210093NanjingCHINA
Deptartment of Computer Science
Nanjing Normal University
210097NanjingCHINA
Rui-Yu Ni
Department of Computer Science and Technology
Nanjing University
210093NanjingCHINA
Chen Jia-Jun chenjj@nlp.nju.edu.cn
Department of Computer Science and Technology
Nanjing University
210093NanjingCHINA
A Hybrid Approach to Chinese Word Segmentation around CRFs
In this paper, we present a Chinese word segmentation system which is consisted of four components, i.e. basic segmentation, named entity recognition, error-driven learner and new word detector. The basic segmentation and named entity recognition, implemented based on conditional random fields, are used to generate initial segmentation results. The other two components are used to refine the results. Our system participated in the tests on open and closed tracks of Beijing University (PKU) and Microsoft Research (MSR). The actual evaluation results show that our system performs very well in MSR open track, MSR closed track and PKU open track.
Introduction
Word segmentation is the first step in Chinese NLP, but segmentation of Chinese text into words is a nontrivial task. Three difficult tasks, i.e. ambiguity resolution, named entity recognition and new word identification, are the key problems for word segmentation in Chinese.
In this paper, we report a Chinese word segmentation system using a hybrid strategy. In our system, texts are segmented in four steps: basic segmentation, named entity recognition, error-driven learning and new word detection. The implementations of basic segmentation component and named entity recognition component are both based on conditional random fields (CRFs) (Lafferty et al., 2001), while the Error-Driven learning component and new word detection component use statistical and rule methods. We will describe each of these steps in more details below.
System Description
Basic segmentation
We implemented the basic segmentation component with linear chain structure CRFs. CRFs are undirected graphical models that encode a conditional probability distribution using a given set of features. In the special case in which the designated output nodes of the graphical model are linked by edges in a linear chain, CRFs make a first-order Markov independence assumption among output nodes, and thus correspond to finite state machines (FSMs). CRFs define the conditional probability of a state sequence given an input sequence as
P(s|o) = \frac{1}{Z_o} \exp \left( \sum_{t=1}^{T} \sum_{k=1}^{K} \lambda_k f_k(s_{t-1}, s_t, o, t) \right)
where f_k(s_{t-1}, s_t, o, t) is an arbitrary feature function over its arguments, and \lambda_k is a learned weight for each feature function.
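As a sanity check of the definition above, the following hedged sketch computes P(s|o) for a toy two-character input by brute-force enumeration of all tag sequences; the two feature functions and their weights are made up and are not the features of the system described here.

```python
# Minimal numeric sketch of the linear-chain CRF probability defined above.
import math
from itertools import product

TAGS = ["B", "I", "F", "S"]

def score(tags, obs):
    """Unnormalized log-score: sum over t and k of lambda_k * f_k(s_{t-1}, s_t, o, t)."""
    s = 0.0
    for t in range(len(obs)):
        prev = tags[t - 1] if t > 0 else "<s>"
        s += 0.8 * (prev == "B" and tags[t] == "F")          # assumed transition feature
        s += 1.2 * (tags[t] == "S" and len(obs[t]) == 1)     # assumed emission feature
    return s

def probability(tags, obs):
    z = sum(math.exp(score(list(cand), obs)) for cand in product(TAGS, repeat=len(obs)))
    return math.exp(score(tags, obs)) / z

print(probability(["B", "F"], ["中", "国"]))   # P(s|o) for one candidate tag sequence
```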
Based on CRFs model, we cast the segmentation problem as a sequence tagging problem. Different from (Peng et al., 2004), we represent the positions of a hanzi (Chinese character) with four different tags: B for a hanzi that starts a word, I for a hanzi that continues the word, F for a hanzi that ends the word, S for a hanzi that occurs as a single-character word. The basic segmentation is a process of labeling each hanzi with a tag given the features derived from its surrounding context. The features used in our experiment can be broken into two categories: character features and word features. The character features are instantiations of the following templates, similar to those described in (Ng and Jin, 2004), C refers to a Chinese hanzi.
(a) C_n (n = -2, -1, 0, 1, 2)
(b) C_n C_{n+1} (n = -2, -1, 0, 1)
(c) C_{-1} C_1
(d) Pu(C_0)
In addition to the character features, we came up with another type word context feature which was found very useful in our experiments. The feature captures the relationship between the hanzi and the word which contains the hanzi. For a two-hanzi word, for example, the first hanzi " " within the word " " will have the feature WC0=TWO_F set to 1, the second hanzi " " within the same word " " will have the feature WC0=TWO_L set to 1. For the three-hanzi word, for example, the first hanzi " " within a word " " will have the feature WC0=TRI_F set to 1, the second hanzi " " within the same word " " will have the feature WC0=TRI_M set to 1, and the last hanzi " " within the same word " " will have the feature WC0=TRI_L set to 1. Similarly, the feature can be extended to a four-hanzi word.
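A minimal sketch of this tagging setup is given below (illustrative only; the word-context features such as WC0 and the punctuation template Pu(C_0) are omitted): it derives B/I/F/S tags from a segmented sentence and instantiates the character templates (a)-(c) for one position.

```python
# Hypothetical sketch of casting segmentation as B/I/F/S character tagging.
def bifs_tags(words):
    tags = []
    for w in words:
        if len(w) == 1:
            tags.append("S")
        else:
            tags.extend(["B"] + ["I"] * (len(w) - 2) + ["F"])
    return tags

def char_features(chars, i):
    def c(n):  # character at relative offset n, or a boundary symbol
        j = i + n
        return chars[j] if 0 <= j < len(chars) else "#"
    feats = [f"C{n}={c(n)}" for n in range(-2, 3)]                  # template (a)
    feats += [f"C{n}C{n+1}={c(n)}{c(n+1)}" for n in range(-2, 2)]   # template (b)
    feats.append(f"C-1C1={c(-1)}{c(1)}")                            # template (c)
    return feats

words = ["上海", "浦东", "开发"]             # a toy pre-segmented training sentence
chars = [ch for w in words for ch in w]
print(list(zip(chars, bifs_tags(words))))
print(char_features(chars, 2))
```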
Named Entity recognition
After basic segmentation, a great number of named entities in the text, such as personal names, location names and organization names, are not yet segmented and recognized properly. So the second step we take is named entity recognition based on CRFs. In contrast to Chinese personal names and location name, the recognition of Chinese organization names is a difficult task. Especially in Microsoft Research corpus, the whole organization name, such as " ", " " and so on, is regarded as a single word. In this section, we only present our approach for organization name recognition.
The important factor in applying CRFs model to organization name recognition is how to select the proper features set. The constitution of Chinese organization is very complicated, and most organization names do not have any common structural characteristics except for containing some feature words, such as and so on. But as a proper noun, the occurrence of an organization name has the specific context. The context information of organization name mainly includes the boundary words and some title words (e.g.
). By analyzing a large amount of organization name corpus data, we find that the indicative intensity of different boundary words varies greatly. So we divide the left and right boundary words into two classes according to their indicative intensity. Accordingly, we construct four boundary word lexicons. To solve the problem of the selection and classification of boundary words, we make use of mutual information I(x, y). If there is a genuine association between x and y, then I(x,y) >> 0. If there is no interesting relationship between x and y, then I(x,y) ≈ 0. If x and y are in complementary distribution, then I(x,y) << 0. By using mutual information, we compute the association between a boundary word and the type of organization name, then select and classify the boundary words.
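A small sketch of this selection step, under assumed toy counts (the corpus statistics and the decision threshold are invented for illustration): candidate boundary words are scored by I(x, y) against the organization-name context and classified by indicative intensity.

```python
# Hypothetical sketch: rank candidate boundary words by mutual information.
import math

def mutual_information(n_xy, n_x, n_y, n_total):
    p_xy = n_xy / n_total
    p_x, p_y = n_x / n_total, n_y / n_total
    return math.log2(p_xy / (p_x * p_y))

n_total = 100_000                       # tokens in the annotated corpus (assumed)
n_y = 2_000                             # tokens adjacent to an organization name (assumed)
candidates = {"位于": (300, 150), "今天": (5_000, 90)}   # word -> (n_x, n_xy), toy counts

for word, (n_x, n_xy) in candidates.items():
    mi = mutual_information(n_xy, n_x, n_y, n_total)
    label = "strong boundary word" if mi > 1.0 else "weak / not a boundary word"
    print(word, round(mi, 2), label)
```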
In order to increase the precision of organization name recognition, we still introduce the "forbidden word" feature that would prevent some words from being recognized as component of organization name.
Since we know that some words, such as " " and " ", cannot occur in an organization name, we collected these words to form a "forbidden words" lexicon. Based on the considerations given in the preceding section, we constructed a set of atomic feature patterns, listed in Table 2. Additionally, we defined a set of conjunctive feature patterns, which could form effective feature conjunctions to express complicated contextual information.
Error-driven learning
As a method based on statistics, no matter how well a CRFs model is constructed, some obvious errors always occur because of the sparseness of the training data. For this reason, an error-driven learning method (Brill, 1995) is adopted to refine the segmentation result in this bakeoff, in three steps:
1) Based on the CRFs model, we segment the training data from which all the spaces between words have been removed. Based on the comparison of the segmentation result with the original training data, the differences between them are extracted. If a difference occurs more than one time, an error-driven rule is constructed. The rule is described as α → β, where β is the segmentation of α in the training data. We name the rule set constructed in this step CRF-Ruleset.
2) Based on FMM&BMM, we segment the training data from which all the spaces between words have been removed. As we know, overlapping ambiguity strings (OASs) can be found through FMM&BMM, and the true segmentation of such OASs can be found in the training data. If an OAS has a unique segmentation, a rule is constructed. We call the rule set constructed in this step OAS-Ruleset.
3) In the testing data, if a string α appears that is also in CRF-Ruleset or OAS-Ruleset, it is segmented as β according to the rule α → β. For example, in the PKU testing data, through error-driven learning, we can segment the string " " as " ", while this string is always segmented wrongly as " " by the CRFs model. In other words, error-driven learning can be seen as a consistency check. It assures the consistency of the segmentation of the training data and the testing data when some strings such as " " occur in both.
New word detection
The CRFs segmentation model gives good performance on OOV word identification, but there are still some new words that are not recognized. So an additional new word recognizer is adopted (Chen, 2003).
The in-word probability of each character is used for new word detection. The in-word probability of a character is the probability that the character occurs as part of a word of two or more characters. It is estimated from the training data as P_in-word(c) = N_in-word(c) / N(c), where N_in-word(c) is the number of occurrences of character c inside words of two or more characters and N(c) is the total number of occurrences of c. Consecutive single characters are combined into a new word if the in-word probability of each single character is over a threshold. Obviously, the value of the threshold is the key to the performance of this new word recognizer. Same as (Chen, 2003), we divided the training data into training and development data to find an exact value of the threshold. For this bakeoff, we set the threshold for the PKU data to 0.86 and that for the MSR data to 0.88. Some new words such as " ", " " and " " were recognized by this recognizer.
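A minimal sketch of this detector, with a toy training corpus standing in for the real data: in-word probabilities are estimated from segmented text, and runs of high-probability single characters are merged (0.86 is the PKU threshold quoted above).

```python
# Hypothetical sketch of the in-word-probability new-word detector.
from collections import Counter

def in_word_probability(segmented_corpus):
    total, in_word = Counter(), Counter()
    for words in segmented_corpus:
        for w in words:
            for ch in w:
                total[ch] += 1
                if len(w) >= 2:
                    in_word[ch] += 1
    return {ch: in_word[ch] / total[ch] for ch in total}

def merge_new_words(words, p_in, threshold=0.86):
    out, run = [], []
    def flush():
        if len(run) > 1:
            out.append("".join(run))
        else:
            out.extend(run)
        run.clear()
    for w in words:
        if len(w) == 1 and p_in.get(w, 0.0) > threshold:
            run.append(w)
        else:
            flush()
            out.append(w)
    flush()
    return out

p_in = in_word_probability([["氰化物", "胺基", "三十", "聚合"]])   # toy corpus
print(merge_new_words(["发现", "三", "聚", "氰", "胺"], p_in))      # ['发现', '三聚氰胺']
```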
Experimental results
Conclusion
Our open and closed GB track experiments show that the system's performance is competitive. The most important advantage of our system is its ability to cope with unknown words. Especially in the open track on the Microsoft Research corpus, the recall on OOV words of our system reaches 77.2%, higher than any other system. In future work, we will attempt to generalize large-margin ideas to the CRFs model, leading to new training algorithms with stronger guarantees against overfitting.
We participated in the four GB tracks in the second international Chinese word segmentation bakeoff: PKU-open, PKU-closed, MSR-open, MSR-closed. In the closed tracks, we used the dictionary with the words appearing in the training corpus and did not conduct the process of named entity recognition. In the open tracks, we employed a dictionary of 134,458 entries. The size of training data used in the open tracks is the same as in the closed tracks. Except for a dictionary with more vocabulary, we have not employed any other special resources in the open tracks. Table 1 shows the performance of our system in the bakeoff.

Table 1: Official Bakeoff Outcome
              PKU (open) | PKU (closed) | MSR (open) | MSR (closed)
Precision     0.970      | 0.950        | 0.971      | 0.956
Recall        0.964      | 0.941        | 0.959      | 0.959
F             0.967      | 0.946        | 0.965      | 0.957
OOV           0.058      | 0.058        | 0.026      | 0.026
Recall on OOV 0.864      | 0.813        | 0.785      | 0.496

It's a pity that we made a careless mistake (a program bug) which led to 752 left quotation marks being concatenated to the words following them in the closed and open tracks on the Microsoft Research corpus. With the problem fixed, the actual results on the official test data are better than any other system, as shown in Table 2.

Table 2: Actual evaluation on MSR corpus
                 MSR (open) | MSR (closed)
Precision        0.978      | 0.957
Recall           0.976      | 0.976
F                0.977      | 0.966
OOV              0.026      | 0.026
Recall on OOV    0.772      | 0.387
Recall on In-Voc 0.982      | 0.992
Eric Brill. 1995. Transformation Based Error Driven Learning and Natural Language Processing: A Case Study in Part of Speech Tagging. Computational Linguistics, 21(4).
J. Lafferty, A. McCallum, and F. Pereira. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. ICML 01.
Aitao Chen. 2003. Chinese Word Segmentation Using Minimal Linguistic Knowledge. In Proceedings of the Second SIGHAN Workshop on Chinese Language Processing.
Hwee Tou Ng and Jin Kiat Low. 2004. Chinese Part-of-Speech Tagging: One-at-a-Time or All at Once? Word-based or Character-based? In Proceedings of the Conference on Empirical Methods in Natural Language Processing, Spain.
Fuchun Peng, Fangfang Feng, and Andrew McCallum. 2004. Chinese Segmentation and New Word Detection using Conditional Random Fields. In Proceedings of the Twentieth International Conference on Computational Linguistics, pages 562-568. |
250,390,430 | CLUZH at SIGMORPHON 2022 Shared Tasks on Morpheme Segmentation and Inflection Generation | This paper describes the submissions of the team of the Department of Computational Linguistics, University of Zurich, to the SIGMOR-PHON 2022 Shared Tasks on Morpheme Segmentation and Inflection Generation. Our submissions use a character-level neural transducer that operates over traditional edit actions. While this model has been found particularly well-suited for low-resource settings, using it with large data quantities has been difficult. Existing implementations could not fully profit from GPU acceleration and did not efficiently implement mini-batch training, which could be tricky for a transition-based system. For this year's submission, we have ported the neural transducer to PyTorch and implemented true mini-batch training. This has allowed us to successfully scale the approach to large data quantities and conduct extensive experimentation. We report competitive results for morpheme segmentation (including sharing first place in part 2 of the challenge). We also demonstrate that reducing sentence-level morpheme segmentation to a word-level problem is a simple yet effective strategy. Additionally, we report strong results in inflection generation (the overall best result for large training sets in part 1, the best results in low-resource learning trajectories in part 2). Our code is publicly available. | [
220045952,
236486209,
220285889,
218718982,
235253831,
6628106,
250391087,
102351730,
52144307,
249674559
] | CLUZH at SIGMORPHON 2022 Shared Tasks on Morpheme Segmentation and Inflection Generation
July 14, 2022
Silvan Wehrli silvan.wehrli@uzh.ch
Department of Computational Linguistics
University of Zurich
Switzerland
Simon Clematide simon.clematide@cl.uzh.ch
Department of Computational Linguistics
University of Zurich
Switzerland
Peter Makarov makarov@cl.uzh.ch
Department of Computational Linguistics
University of Zurich
Switzerland
CLUZH at SIGMORPHON 2022 Shared Tasks on Morpheme Segmentation and Inflection Generation
19th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology
July 14, 2022
This paper describes the submissions of the team of the Department of Computational Linguistics, University of Zurich, to the SIGMOR-PHON 2022 Shared Tasks on Morpheme Segmentation and Inflection Generation. Our submissions use a character-level neural transducer that operates over traditional edit actions. While this model has been found particularly well-suited for low-resource settings, using it with large data quantities has been difficult. Existing implementations could not fully profit from GPU acceleration and did not efficiently implement mini-batch training, which could be tricky for a transition-based system. For this year's submission, we have ported the neural transducer to PyTorch and implemented true mini-batch training. This has allowed us to successfully scale the approach to large data quantities and conduct extensive experimentation. We report competitive results for morpheme segmentation (including sharing first place in part 2 of the challenge). We also demonstrate that reducing sentence-level morpheme segmentation to a word-level problem is a simple yet effective strategy. Additionally, we report strong results in inflection generation (the overall best result for large training sets in part 1, the best results in low-resource learning trajectories in part 2). Our code is publicly available.
Introduction
This paper describes our submissions to the following SIGMORPHON 2022 shared tasks:
SEGM Morpheme Segmentation (Batsuren et al., 2022) 1
1. Word-level morpheme segmentation
2. Sentence-level morpheme segmentation
INFL Typologically Diverse and Acquisition-Inspired Morphological Inflection Generation 2
1. Typologically diverse morphological inflection (Kodner et al., 2022)
2. Morphological acquisition trajectories (Kodner and Khalifa, 2022)
All our submissions rely on the same neural hard-attention transducer architecture that has shown strong language-independent performance in a variety of character-level transduction tasks in morphology, grapheme-to-phoneme conversion, and text normalization (Makarov and Clematide, 2018, 2020a,b).
Morpheme Segmentation
The goal of this task is to design a system that splits words into morphemes (Table 1). Part 1 focuses on word-level morpheme segmentation (inputs are word types), part 2 on sentence-level morpheme segmentation (inputs are tokenized sentences). In part 1, there is a unique segmentation for every input word. This track provides very large datasets (in hundreds of thousands of training examples per language), allowing us to test the scalability of our system. In part 2, a word form may be segmented differently depending on the context. It offers an interesting setup to study, on the example of three languages (English, Czech, Mongolian), how important it is for a system to recognize and correctly handle this ambiguity. Our submission for part 2 tests this by using a word-level model (developed for part 1), optionally with part-of-speech (POS) tags as side input.
Inflection Generation
The SIGMORPHON-UniMorph 2022 shared task on typologically diverse and acquisition-inspired morphological inflection generation asks to predict an inflected word form given its lemma and a set of morphosyntactic features specified according to the UniMorph standard (Table 1). Part 1 consists of 32 languages with small training sets (mostly 700 items, but for 4 languages only 70 to 240 items) and 21 large training sets (exactly 7,000 items). Part 2 has an ablation-style setup for Arabic, English, and German: For each language, there is a dataset for each increment of 100, ranging from 100 to 600 (German) or 1,000 training samples (Arabic, English). The development set feature specifications are representative of the test set. Both tasks target the generalization capabilities of morphology learning systems by examining separately their test set performance on seen and unseen lemmas and feature specifications.
Model Description
As a basis for all our submissions, we use a neural character-level transducer that edits the input string into the output string by a sequence of traditional edit actions: substitutions, insertions, deletion, and copy. The specific version of this approach was developed for grapheme-to-phoneme conversion (Makarov and Clematide, 2020a). Such neural transducers have typically performed well in morphological and related character-level transduction tasks in low to medium training data settings. Although they can be competitive in large-data regimes (Makarov and Clematide, 2018), their successful application to large data settings with appropriately large parameter sizes (cf. the Transformerbased models of Wu et al. (2021) have over 7M parameters) may also be limited by a specific implementation. In this year's submission, we scale the approach to large datasets by porting it to a different framework and making algorithmic improvements to training.
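For illustration, a hedged sketch of this kind of action space (the action names and the inflection example are ours, not the exact inventory of the implementation): a sequence of copy, substitution, insertion, and deletion actions rewrites the input into the output.

```python
# Hypothetical sketch: applying a sequence of traditional edit actions.
def apply_actions(source, actions):
    out, i = [], 0
    for action, *arg in actions:
        if action == "COPY":
            out.append(source[i]); i += 1
        elif action == "DEL":                 # consume one input char, emit nothing
            i += 1
        elif action == "SUB":                 # consume one input char, emit arg
            out.append(arg[0]); i += 1
        elif action == "INS":                 # emit arg without consuming input
            out.append(arg[0])
    return "".join(out)

# inflection-style toy example: sue + V;PST -> sued
print(apply_actions("sue", [("COPY",), ("COPY",), ("COPY",), ("INS", "d")]))  # sued
```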
True mini-batch training. The training procedure for transition-based systems could be difficult to batch (Noji and Oseki, 2021;Ding and Koehn, 2019), which is why many systems are trained by gradient accumulation over individual samples (and possibly relying on library optimizations such as DyNet Autobatch (Neubig et al., 2017b)). This results in slow training for large data sets. In our im- plementation of true mini-batch training, we start by precomputing gold action sequences using an oracle character aligner. By doing so, alignments and gold actions for all decoding steps of all training samples are known a priori (as opposed to being computed on the fly, which would be useful when parameter updates are interleaved with sampling from the model distribution). This permits calling the unrolled version of the decoder. The resulting procedure dramatically speeds up training compared to gradient accumulation. Furthermore, our implementation supports batched greedy decoding. Table 2 gives an impression of these performance improvements: For a batch size of 32, training is around 3 times faster on a CPU and close to 100 times faster on a GPU. For a batch size of 512, training is faster by a factor of over 250 on a GPU. Additionally, the time needed for greedy decoding can be efficiently decreased on a GPU. 3
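A hedged sketch of the batching idea only (nothing here reproduces the actual codebase): once gold action sequences are known in advance, they can be indexed, padded into a tensor, and consumed by an unrolled, teacher-forced decoder, with a mask so that padding steps do not contribute to the loss.

```python
# Hypothetical sketch of padding precomputed gold action sequences for mini-batch
# teacher forcing; the action inventory below is invented for illustration.
import torch
from torch.nn.utils.rnn import pad_sequence

action_vocab = {"<pad>": 0, "COPY": 1, "DEL": 2, "INS_d": 3, "INS_s": 4}
gold_actions = [["COPY", "COPY", "COPY", "INS_d"],        # sue -> sued
                ["COPY", "COPY", "INS_s"]]                # do  -> dos (toy)

batch = pad_sequence(
    [torch.tensor([action_vocab[a] for a in seq]) for seq in gold_actions],
    batch_first=True, padding_value=action_vocab["<pad>"])
mask = batch != action_vocab["<pad>"]

print(batch)   # shape (2, 4); the loss would be masked so padding steps do not count
print(mask)
```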
Further model details. The latest implementation only uses teacher forcing. Specifically, it does not yet incorporate roll-ins, i.e. the model does not see its own predictions during training, which would improve generalizability by countering exposure bias (Pomerleau, 1989). We also add support for features. Features are treated as atomic. For INFL, the features associated with an inflection input-output pair are passed through an embedding layer and then summed. For further details on the system and the oracle character aligner, we refer the reader to Makarov and Clematide (2020a).
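A minimal sketch of the "embed each feature and sum" treatment of features, assuming a fixed-size feature set per example; the vocabulary is invented and the dimension of 50 mirrors the small-dataset setting quoted later, but this is not the actual module.

```python
# Hypothetical sketch: summing feature embeddings for an INFL example.
import torch
import torch.nn as nn

feature_vocab = {"V": 0, "PST": 1, "SBJV": 2, "3": 3}
embed = nn.EmbeddingBag(num_embeddings=len(feature_vocab), embedding_dim=50, mode="sum")

feats = torch.tensor([[feature_vocab["V"], feature_vocab["PST"]]])   # one example: V;PST
summed = embed(feats)            # shape (1, 50): the summed feature representation
print(summed.shape)
```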
Submission Details
For both tasks, we train separate models for each language and use the development set exclusively for model selection.
Morpheme Segmentation
Data preprocessing. Besides NFD normalization as a preprocessing step, we substitute the multicharacter morpheme delimiter (" @@") by a single character unseen in the data to decrease the length of the output.
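A minimal sketch of this preprocessing, assuming "|" as the single replacement character; the character actually used in the submission is not specified here.

```python
# Hypothetical sketch of the SEGM target preprocessing and postprocessing.
import unicodedata

DELIMITER, PLACEHOLDER = " @@", "|"     # PLACEHOLDER is an assumption

def preprocess_target(target):
    target = unicodedata.normalize("NFD", target)
    return target.replace(DELIMITER, PLACEHOLDER)

def postprocess_prediction(prediction):
    return prediction.replace(PLACEHOLDER, DELIMITER)

print(preprocess_target("hierarch @@y @@ism @@s"))    # hierarch|y|ism|s
print(postprocess_prediction("hierarch|y|ism|s"))     # hierarch @@y @@ism @@s
```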
Sentence-level segmentation. We simplify part 2 of the SEGM task by reducing it to a word-level problem. Concretely, we split the input sentences into single word tokens and train the model on these word tokens, similarly to part 1. The single word predictions are then simply concatenated to form the original sentence. Since this completely neglects the context of the words, we have also experimented with POS tags as additional input features (Table 3). We use TreeTagger (Schmid, 1999) to obtain the features. 4 We also experimented with transducing entire sentences in one go; however, this led to a substantial drop in accuracy.
Hyper-parameter search. For both parts, we have evaluated extensively various choices of optimizers, learning rate schedulers, batch size, and encoder dropout. We found the Adam optimizer (Kingma and Ba, 2015) to work well, as well as the scheduler that reduces the learning rate whenever a development set metric plateaus. We settled on a batch size of 32 for all models, which offers a good trade-off between model performance and training speed.
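A hedged sketch of the sentence-to-word reduction described in the sentence-level segmentation paragraph above; segment_word stands in for the trained word-level transducer, pos_tags for the TreeTagger output, and the toy model is purely illustrative.

```python
# Hypothetical sketch: reduce sentence-level segmentation to per-token calls.
def segment_sentence(sentence, segment_word, pos_tags=None):
    tokens = sentence.split(" ")
    tags = pos_tags or [None] * len(tokens)
    segmented = [segment_word(tok, tag) for tok, tag in zip(tokens, tags)]
    return " ".join(segmented)

# toy stand-in model: "knows" one segmentation, copies everything else
toy_model = lambda tok, tag: {"hierarchisms": "hierarch @@y @@ism @@s"}.get(tok, tok)
print(segment_sentence("she studies hierarchisms .", toy_model))
```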
Encoders. We use a 2-layer stacked LSTM as the encoder and experimented with encoder dropout. We also experimented extensively with a Transformer encoder (Vaswani et al., 2017). Despite considerable effort, we failed to make it work at the performance level of stacked LSTMs. Other hyperparameters (e.g. various embedding dimensions) are similar to the previous work (Makarov and Clematide, 2020a).
Decoding. For efficiency, we compute all the model outputs using mini-batch greedy decoding.
Ensembling. All our submissions are majority-vote ensembles. For part 1, we submit a 5-strong ensemble, CLUZH, composed of 3 models without encoder dropout and 2 models with encoder dropout of 0.1. 5 For part 2, we submit three ensembles. All individual models have an encoder dropout probability of 0.25 and vary only in their use of features: CLUZH-1 with 3 models without POS features, CLUZH-2 with 3 models with POS tag features, and CLUZH-3, which combines all the models from CLUZH-1 and CLUZH-2.
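For illustration, a minimal sketch of majority voting over per-input predictions from several models; the tie-breaking here simply takes the first most common answer and is an assumption, not necessarily the submission's policy.

```python
# Hypothetical sketch of majority-vote ensembling.
from collections import Counter

def majority_vote(predictions_per_model):
    """predictions_per_model: list over models, each a list over inputs."""
    ensembled = []
    for votes in zip(*predictions_per_model):
        ensembled.append(Counter(votes).most_common(1)[0][0])
    return ensembled

models = [["walk @@ed", "dog @@s"], ["walk @@ed", "dogs"], ["walke @@d", "dog @@s"]]
print(majority_vote(models))   # ['walk @@ed', 'dog @@s']
```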
Inflection Generation
Data preprocessing. For both parts, we apply NFD normalization to the input and split the Uni-Morph features at ";" by default. For languages that showed lower performance compared to the neural or non-neural baseline on the development set in part 1, we also computed models without NFD normalization and chose the best based on their development set performance. For Korean, we observed some Latin transliteration noise in the train/development set targets, which we removed before training. For Lamaholot (slp), we observed a very low accuracy (5%) on the development set compared to the neural baseline's 20% performance. By splitting UniMorph features at "+" as well as ";", 6 we achieved better generalization for this low-resource language (only 240 training examples available).
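A small sketch of the feature splitting, using the feature specification from the corresponding footnote as the example; nothing else about the preprocessing is reproduced here.

```python
# Hypothetical sketch of UniMorph feature splitting at ";" and optionally "+".
def split_features(spec, also_split_plus=False):
    parts = spec.split(";")
    if also_split_plus:
        parts = [p for part in parts for p in part.split("+")]
    return parts

print(split_features("V;ARGAC2P+ARGNO2P;SBJV"))                         # 3 features
print(split_features("V;ARGAC2P+ARGNO2P;SBJV", also_split_plus=True))   # 4 features
```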
Hyper-parameters. For small datasets in both parts: batch size 1, a patience of 30 epochs, onelayer encoder and decoder with hidden size 200, character and action embeddings of size 100, feature embeddings of size 50, the AdamW optimizer (Loshchilov and Hutter, 2019) with a learning rate of 0.0005 (half of the default value), the reducelearning-rate-on-plateau scheduler with factor 0.75, and beam decoding with beam width 4. For a few languages whose development set performance was lower than that of the baselines, we computed models without NFD normalization and used those in case of improved accuracy. 7 For large datasets in part 1, we made the following changes from the above: batch size 32, a patience of 20 epochs, action embeddings of size 200, a two-layer encoder with a hidden size of 1,000, a one-layer decoder with a hidden size of 2,000. In case of the development set performance was below that of any of the official baselines, we used some alternative hyper-parameters: 8 no NFD normalization, batch size 16, a one-layer encoder with a hidden size of 2,000, a one-layer decoder with a hidden size of 4,000, and the Adadelta optimizer (Zeiler, 2012) with the default learning rate. Hyperparameters were not chosen using a systematic grid search or experimentation.
Convergence. For the small datasets in part 1 with default hyper-parameters and NFD normalization, we observe large differences in the number of epochs to convergence (mean 27.3, SD 22.8). For some languages, e.g. Chukchi (ckt), Ket (ket), and Ludian (lud), we see the best results on the first epoch, which typically means the model has just learned to copy the input to the output. For other languages, much larger or highly varying numbers of epochs to convergence are observed: Slovak (15-93), Karelian (13-88), Mongolian, Khalkha (19-61), and Korean (12-143).
For the large datasets in part 1 (7,000 training examples) with default hyper-parameters and NFD normalization, we observe a mean of 17.3 epochs to convergence (SD 16.0). For Ludian, even in the large setting, the first epoch with copying gave the best results. In contrast, Georgian could generally profit from more epochs (mean 36.8, SD 17.9).
Ensembling. Our submission for part 1 is a 5strong majority-voting ensemble, and it is a 10strong ensemble for part 2. Table 4 and Table 5 show our results for parts 1 and 2, respectively. Based on the macro-average F1 score over all languages, our submission for part 1 ranks third out of 7 full submissions. For part 2, our submission CLUZH-3 was declared the winner out of 10 full submissions. 9
Results and Discussion
Morpheme Segmentation
Dropout. The results for part 1 suggest that encoder dropout can help improve model performance. For some languages, the performance can improve by as much as 1% F1 score absolute.
Ensembling. Ensembling brings a clear improvement over single-best results. On average, the improvement is +0.55% on the development set and +0.53% on the test set (compared to the best single model result). The improvement on the English dataset is substantial: +1.64% and +1.84% on the development and test sets, respectively.
Gains from POS tags. The results for part 2 suggest that treating a sentence-level problem as word-level may be a simple yet powerful strategy for morpheme segmentation. The success of this strategy depends on the language and the data. The more segmentation ambiguity a language has, the more important the context is. Mongolian has the highest segmentation ambiguity (Table 6). Around 1/5 of the tokens in the training data have at least two possible segmentations, whereas Czech and English exhibit little to no ambiguity. This may partially explain why the performance on the Mongolian data is the lowest. This also explains why using POS tags as additional features brings the biggest improvement for Mongolian: +0.29% and +0.27% on the development and test sets, based on the average of individual models. Using POS tags improves the prediction of ambiguous segmentation by an absolute 1.1% and 0.6% on the development and
PyTorch reimplementation. This year's system is a close reimplementation in PyTorch (Paszke et al., 2019) of our earlier CPU codebase using DyNet (Neubig et al., 2017a). It fully supports GPU utilization, allowing for efficient processing of large amounts of training data. Our code is publicly available. English dataset. This makes the learning problem much harder, which is further exacerbated by the relatively small size of the data (compared to English).
Inflection Generation
The part 1 test set results are shown in Table 9. Given the large number of languages, we discuss the average accuracy on small and large training sets. An important goal for this shared task was to assess a system's performance on test data subsets defined by whether both the lemma and the feature specification were seen in the training data (+L +F in the Table), whether only the lemma (+L, -F), or only the feature specification (-L, +F) were seen, or whether neither of them (-L -F) appeared in the training data.
Small datasets. On the small datasets, our system only excels on the -L +F subset, meaning it is strong in modeling the behaviour of features. In the small dataset setting, the best competitor system, UBC, has an extremely strong performance in case the lemma is known (+L). It would be interesting to know what kind of information or data augmentation UBC uses: The neural baseline, which utilizes data augmentation, has a much lower performance (24.9%) than our submission. Overall, our submission with a 5-strong ensemble achieves the second-best result of the submissions covering all languages.
Large datasets. In the large dataset setting, our submission shows the best performance overall. On the subset with seen lemmas and unseen features (+L -F), the neural baseline is the only system with slightly better results. This indicates that our system's modeling of lemmas is not yet optimal. The information flow in our architecture maybe dominated by the features (they are fed into the decoder at every action prediction step) and the aligned input character, and it may not have the best representation of the input lemma as a whole.
Trajectories. The test set results for part 2 are shown in Figure 1. Our 10-strong ensemble was the clear overall winner in this low-resource track. It beats the best competing approaches by a substantial margin on the per-language average: Arabic 59.6% accuracy (best competitor OSU 57.5%), German 76.7% (non-neural baseline 74.8%), English 85.7% (OSU 81.5%). Individual model performance varies, and the majority-vote ensembling improved the scores by 1.4% absolute on average on the test set. Interestingly, the difference between the average model performance and the ensemble performance does not get smaller with larger training sets.
The correlation between the increasing number of training examples and the improving test set performance is almost perfect for the average performance. Ensembles are slightly less stable.
Conclusion
This paper presents the submissions of the Department of Computational Linguistics, University of Zurich, to the SIGMOPRHON 2022 morpheme segmentation and inflection generation shared tasks. We build on the previous architecture, the neural transducer over edit actions, porting it to a new deep learning framework and implementing GPUoptimized mini-batch training. This permits scaling the system to large training datasets, as demonstrated by strong performance in both shared tasks.
We show that reducing sentence-level morpheme segmentation to a word-level problem is a viable strategy. Conditioning on POS tags brings further improvements. We leave it to future work to explore more powerful representations of context. We experimented with a Transformer-based encoder for morpheme segmentation, and while the initial results were not satisfactory, we intent to pursue this further. In inflection generation, we note problems with capturing unseen lemmas, despite otherwise strong performance across data regimes.
1 https://github.com/sigmorphon/2022SegmentationST
2 https://github.com/sigmorphon/2022InflectionST
Table 1: Examples of morpheme segmentation (SEGM) and inflection generation (INFL). SEGM involves predicting canonical forms of morphemes. The inputs for INFL consist of lemmas and UniMorph feature specifications.
Task | Input        | Output
SEGM | hierarchisms | hierarch @@y @@ism @@s
INFL | sue V;PST    | sued
Table 2: Mini-batch training and greedy decoding speed for this year's implementation (CLUZH) vs the baseline (BL) of Makarov and Clematide (2020a) on the Armenian dataset of the SIGMORPHON 2021 shared task on grapheme-to-phoneme conversion (Ashby et al., 2021). The BL models are trained on CPU using gradient accumulation (GA). All numbers are given in seconds and per 1,000 samples. The training times are averages of 20 epochs on the training set. The greedy decoding times are averages of 20 runs on the development set using a well-trained model. The CLUZH model hyper-parameters are identical to those of Makarov and Clematide (2020a).
Table 3: SEGM part 2 with POS features for Mongolian. The features inferred from the context using TreeTagger could help disambiguate the ambiguous word form (in bold in the original table).
Input:  Гэрт эмээ хоол хийв .
Output: Гэр @@т эмээ хоол хийх @@в .
POS:    NN NN VB VB .
Gloss:  Grandmother cooked at home.
Input:  Би өдөр эмээ уусан .
Output: Би өдөр эм @@ээ уух @@сан .
POS:    PR NN VB VB .
Gloss:  Today I took my medicine.
Table 4: F1 scores for SEGM part 1.
Table 5: F1 scores for SEGM part 2. All models are trained with a dropout probability of 0.25. Each cell gives dev / test F1.
Language  | without features, avg (3 models) | without features, ensemble (3 models) | with POS tags, avg (3 models) | with POS tags, ensemble (3 models) | combined ensemble (6 models) | best other (test)
Czech     | 94.06 / 90.90 | 94.54 / 91.35 | 94.15 / 91.15 | 94.45 / 91.76 | 94.72 / 91.99 | 91.76
English   | 98.12 / 89.27 | 98.31 / 89.47 | 98.18 / 89.29 | 98.38 / 89.47 | 98.41 / 89.54 | 96.31
Mongolian | 85.95 / 81.57 | 87.06 / 82.22 | 86.24 / 81.84 | 87.26 / 82.55 | 87.62 / 82.88 | 82.59
AVG       | 92.71 / 87.25 | 93.30 / 87.68 | 92.86 / 87.43 | 93.36 / 87.93 | 93.58 / 88.14 | 90.22
Table 6: Segmentation ambiguity in SEGM part 2: relative frequency of unambiguous (1) vs ambiguous (≥2) word tokens.
Language  | train: 1 | train: ≥2 | dev: 1 | dev: ≥2
Czech     | 100%     | 0%        | 100%   | 0%
English   | 99.58%   | 0.42%     | 99.75% | 0.25%
Mongolian | 77.91%   | 22.09%    | 90.00% | 10.00%
10 https://github.com/slvnwhrl/il-reimplementation
Table 7: Impact of POS features on Mongolian, SEGM part 2. ambiguous shows the average percentage of correctly predicted ambiguous segmentations for Mongolian. NF denotes models without features, POS denotes models using POS tags. all shows the absolute improvement for POS compared to NF, in relation to the whole dataset.
          | dev: ambiguous NF | dev: ambiguous POS | dev: all ∆ | test: ambiguous NF | test: ambiguous POS | test: all ∆
Mongolian | 63.0%             | 64.1%              | +0.11%     | 59.5%              | 60.1%               | +0.06%
Table 8: Word counts in SEGM part 2: the total number of word forms and the number of unique words.
Language  | train: total | train: unique | dev: total | dev: unique
Czech     | 15,157       | 5,126         | 7,545      | 3,217
English   | 169,117      | 17,249        | 21,444     | 4,849
Mongolian | 13,237       | 5,293         | 6,632      | 3,216
Table 9: Test results (accuracy macro-averaged over languages) for INFL part 1 split by training dataset size: large (7,000 training examples) vs small (up to 700 examples). ∆ shows the difference between our submission and the best competitor covering the full set of languages.
Note that the precomputation of gold action sequences for the training data takes around 12 seconds per 1000 samples. However, this procedure is only required once per dataset as the precomputed output can be reused for any training run. In any case, the gains shown inTable 2easily offset the additionally required time.
The parameter files are available at https://www.cis.unimuenchen.de/˜schmid/tools/TreeTagger/.
Due to a mistake, the predictions by the models with dropout 0.1 were included twice, and a prepared model with dropout 0.25 was not used at all. However, the F1 macroaverage over all the languages for the intended ensemble on the development set is only 0.08 points higher.
For instance, V;ARGAC2P+ARGNO2P;SBJV would be split into 4 separate features.
7 Arabic, Gothic, Hungarian, and Old Norse.
8 Arabic, Assamese, Evenki, Hungarian, Kazakh, Mongolian, Khalkha, and Old Norse.
Our submission performed the best on two out of three languages (Czech and Mongolian). As it was beaten by another submission based on the macro F1 average, two submissions were declared winners.
Lucas F.E. Ashby, Travis M. Bartley, Simon Clematide, Luca Del Signore, Cameron Gibson, Kyle Gorman, Yeonju Lee-Sikka, Peter Makarov, Aidan Malanoski, Sean Miller, Omar Ortiz, Reuben Raff, Arundhati Sengupta, Bora Seo, Yulia Spektor, and Winnie Yan. 2021. Results of the Second SIGMORPHON Shared Task on Multilingual Grapheme-to-Phoneme Conversion. In Proceedings of the 18th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology.
Khuyagbaatar Batsuren, Gábor Bella, Aryaman Arora, Viktor Martinović, Kyle Gorman, Zdeněk Žabokrtský, Amarsanaa Ganbold, Šárka Dohnalová, Magda Ševčíková, Kateřina Pelegrinová, Fausto Giunchiglia, Ryan Cotterell, and Ekaterina Vylomova. 2022. The SIGMORPHON 2022 Shared Task on Morpheme Segmentation. In 19th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology.
Shuoyang Ding and Philipp Koehn. 2019. Parallelizable Stack Long Short-Term Memory. In Proceedings of the Third Workshop on Structured Prediction for NLP.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A Method for Stochastic Optimization. In 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
Jordan Kodner and Salam Khalifa. 2022. SIGMORPHON-UniMorph 2022 Shared Task 0: Modeling Inflection in Language Acquisition. In Proceedings of the SIGMORPHON 2022 Shared Task: Morphological Inflection.
Jordan Kodner, Salam Khalifa, Khuyagbaatar Batsuren, Hossep Dolatian, Ryan Cotterell, Faruk Akkuş, Antonios Anastasopoulos, Taras Andrushko, Aryaman Arora, Nona Atanelov, Gábor Bella, Elena Budianskaya, Yustinus Ghanggo Ate, Omer Goldman, Simon Guriel, Silvia Guriel-Agiashvili, Jan Hajič, Jan Hric, Ritvan Karahodja, Witold Kieraś, Andrew Krizhanovsky, Natalia Krizhanovsky, Igor Marchenko, Magdalena Markowska, Polina Mashkovtseva, Maria Nepomniashchaya, Daria Rodionova, Elizabeth Salesky, Karina Sheifer, Alexandra Serova, Anastasia Yemelina, Jeremiah Young, and Ekaterina Vylomova. 2022. SIGMORPHON-UniMorph 2022 Shared Task 0: Generalization and Typologically Diverse Morphological Inflection. In Proceedings of the SIGMORPHON 2022 Shared Task: Morphological Inflection.
Ilya Loshchilov and Frank Hutter. 2019. Decoupled Weight Decay Regularization. In 7th International Conference on Learning Representations, ICLR 2019, New Orleans, LA, USA, May 6-9, 2019.
Peter Makarov and Simon Clematide. 2018. Imitation Learning for Neural Morphological String Transduction. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing.
Peter Makarov and Simon Clematide. 2020a. CLUZH at SIGMORPHON 2020 Shared Task on Multilingual Grapheme-to-Phoneme Conversion. In Proceedings of the 17th SIGMORPHON Workshop on Computational Research in Phonetics, Phonology, and Morphology.
Peter Makarov and Simon Clematide. 2020b. Semi-supervised Contextual Historical Text Normalization. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics.
Graham Neubig, Chris Dyer, Yoav Goldberg, Austin Matthews, Waleed Ammar, Antonios Anastasopoulos, Miguel Ballesteros, David Chiang, Daniel Clothiaux, Trevor Cohn, Kevin Duh, Manaal Faruqui, Cynthia Gan, Dan Garrette, Yangfeng Ji, Lingpeng Kong, Adhiguna Kuncoro, Gaurav Kumar, Chaitanya Malaviya, Paul Michel, Yusuke Oda, Matthew Richardson, Naomi Saphra, Swabha Swayamdipta, and Pengcheng Yin. 2017a. DyNet: The Dynamic Neural Network Toolkit. arXiv preprint arXiv:1701.03980.
Graham Neubig, Yoav Goldberg, and Chris Dyer. 2017b. On-the-fly Operation Batching in Dynamic Computation Graphs. In Advances in Neural Information Processing Systems, volume 30.
Hiroshi Noji and Yohei Oseki. 2021. Effective Batching for Recurrent Neural Network Grammars. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021.
Adam Paszke, Sam Gross, Francisco Massa, Adam Lerer, James Bradbury, Gregory Chanan, Trevor Killeen, Zeming Lin, Natalia Gimelshein, Luca Antiga, Alban Desmaison, Andreas Kopf, Edward Yang, Zachary DeVito, Martin Raison, Alykhan Tejani, Sasank Chilamkurthy, Benoit Steiner, Lu Fang, Junjie Bai, and Soumith Chintala. 2019. PyTorch: An Imperative Style, High-Performance Deep Learning Library. In Advances in Neural Information Processing Systems 32.
Dean A. Pomerleau. 1989. ALVINN: An Autonomous Land Vehicle in a Neural Network. In Proceedings of the Conference on Neural Information Processing Systems.
H. Schmid. 1999. Improvements in Part-of-Speech Tagging with an Application to German. In Natural Language Processing Using Very Large Corpora, Text, Speech and Language Technology.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All you Need. In Advances in Neural Information Processing Systems, volume 30.
Shijie Wu, Ryan Cotterell, and Mans Hulden. 2021. Applying the Transformer to Character-level Transduction. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume.
Matthew D. Zeiler. 2012. ADADELTA: An Adaptive Learning Rate Method. arXiv:1212.5701. |
129,092 | A Systematic Study of Semantic Vector Space Model Parameters | We present a systematic study of parameters used in the construction of semantic vector space models. Evaluation is carried out on a variety of similarity tasks, including a compositionality dataset, using several source corpora. In addition to recommendations for optimal parameters, we present some novel findings, including a similarity metric that outperforms the alternatives on all tasks considered. | [
7919491,
7747235,
11567084,
8712237,
8927694,
8701528
] | A Systematic Study of Semantic Vector Space Model Parameters
A Systematic Study of Semantic Vector Space Model Parameters

Douwe Kiela (douwe.kiela@cl.cam.ac.uk) and Stephen Clark
Computer Laboratory, University of Cambridge

Proceedings of the 2nd Workshop on Continuous Vector Space Models and their Compositionality (CVSC) @ EACL 2014, Gothenburg, Sweden, April 26-30, 2014. Association for Computational Linguistics.
We present a systematic study of parameters used in the construction of semantic vector space models. Evaluation is carried out on a variety of similarity tasks, including a compositionality dataset, using several source corpora. In addition to recommendations for optimal parameters, we present some novel findings, including a similarity metric that outperforms the alternatives on all tasks considered.
Introduction
Vector space models (VSMs) represent the meanings of lexical items as vectors in a "semantic space". The benefit of VSMs is that they can easily be manipulated using linear algebra, allowing a degree of similarity between vectors to be computed. They rely on the distributional hypothesis (Harris, 1954): the idea that "words that occur in similar contexts tend to have similar meanings" (Turney and Pantel, 2010;Erk, 2012). The construction of a suitable VSM for a particular task is highly parameterised, and there appears to be little consensus over which parameter settings to use.
This paper presents a systematic study of the following parameters:
• vector size;
• window size;
• window-based or dependency-based context;
• feature granularity;
• similarity metric;
• weighting scheme;
• stopwords and high frequency cut-off.
A representative set of semantic similarity datasets has been selected from the literature, including a phrasal similarity dataset for evaluating compositionality. The choice of source corpus is likely to influence the quality of the VSM, and so we use a selection of source corpora. Hence there are two additional "superparameters":
• dataset for evaluation;
• source corpus.
Previous studies have been limited to investigating only a small number of parameters, and using a limited set of source corpora and tasks for evaluation (Curran and Moens, 2002a; Curran and Moens, 2002b; Curran, 2004; Grefenstette, 1994; Pado and Lapata, 2007; Sahlgren, 2006; Turney and Pantel, 2010; Schulte im Walde et al., 2013). Rohde et al. (2006) considered several weighting schemes for a large variety of tasks, while Weeds et al. (2004) did the same for similarity metrics. Stone et al. (2008) investigated the effectiveness of sub-spacing corpora, where a larger corpus is queried in order to construct a smaller sub-spaced corpus (Zelikovitz and Kogan, 2006). Blacoe and Lapata (2012) compare several types of vector representations for semantic composition tasks. The most comprehensive existing studies of VSM parameters - encompassing window sizes, feature granularity, stopwords and dimensionality reduction - are by Bullinaria and Levy (2007; 2012) and Lapesa and Evert (2013).
Section 2 introduces the various parameters of vector space model construction. We then attempt, in Section 3, to answer some of the fundamental questions for building VSMs through a number of experiments that consider each of the selected parameters. In Section 4 we examine how these findings relate to the recent development of distributional compositional semantics (Baroni et al., 2013;Clark, 2014), where vectors for words are combined into vectors for phrases.
Data and Parameters
Two datasets have dominated the literature with respect to VSM parameters: WordSim353 (Finkelstein et al., 2002) and the TOEFL synonym dataset (Landauer and Dumais, 1997). There is a risk that semantic similarity studies have been overfitting to their idiosyncracies, so in this study we evaluate on a variety of datasets: in addition to WordSim353 (W353) and TOEFL, we also use the Rubenstein & Goodenough (RG) (1965) and Miller & Charles (MC) (1991) data, as well as a much larger set of similarity ratings: the MEN dataset (Bruni et al., 2012). All these datasets consist of human similarity ratings for word pairings, except TOEFL, which consists of multiple choice questions where the task is to select the correct synonym for a target word. In Section 4 we examine our parameters in the context of distributional compositional semantics, using the evaluation dataset from Mitchell and Lapata (2010). Table 1 gives statistics for the number of words and word pairings in each of the datasets. As well as using a variety of datasets, we also consider three different corpora from which to build the vectors, varying in size and domain. These include the BNC (Burnard, 2007) (10^6 word types, 10^8 tokens) and the larger ukWaC (Baroni et al., 2009) (10^7 types, 10^9 tokens). We also include a sub-spaced Wikipedia corpus (Stone et al., 2008): for all words in the evaluation datasets, we build a subcorpus by querying the top 10-ranked Wikipedia documents using the words as search terms, resulting in a corpus with 10^6 word types and 10^7 tokens. For examining the dependency-based contexts, we include the Google Syntactic N-gram corpus (Goldberg and Orwant, 2013), with 10^7 types and 10^11 tokens.
Parameters
We selected the following set of parameters for investigation, all of which are fundamental to vector space model construction 1 .
Vector size Each component of a vector represents a context (or perhaps more accurately a "contextual element", such as second word to the left of the target word). 2 The number of components varies hugely in the literature, but a typical value is in the low thousands. Here we consider vector sizes ranging from 50,000 to 500,000, to see whether larger vectors lead to better performance.
Context There are two main approaches to modelling context: window-based and dependency-based. For window-based methods, contexts are determined by word co-occurrences within a window of a given size, where the window simply spans a number of words occurring around instances of a target word. For dependency-based methods, the contexts are determined by word co-occurrences in a particular syntactic relation with a target word (e.g. target word dog is the subject of run, where run_subj is the context). We consider different window sizes and compare window-based and dependency-based methods.
Feature granularity Context words, or "features", are often stemmed or lemmatised. We investigate the effect of stemming and lemmatisation, in particular to see whether the effect varies with corpus size. We also consider more finegrained features in which each context word is paired with a POS tag or a lexical category from CCG (Steedman, 2000).
Similarity metric A variety of metrics can be used to calculate the similarity between two vectors. We consider the similarity metrics in Table 2.
Weighting Weighting schemes increase the importance of contexts that are more indicative of the meaning of the target word: the fact that cat cooccurs with purr is much more informative than its co-occurrence with the. Table 3 gives definitions of the weighting schemes considered.
Stopwords, high frequency cut-off Function words and stopwords are often considered too uninformative to be suitable context words. Ignoring them not only leads to a reduction in model size and computational effort, but also to a more informative distributional vector. Hence we followed standard practice and did not use stopwords as context words (using the stoplist in NLTK (Bird et al., 2009)). The question we investigated is
whether removing more context words, based on a frequency cut-off, can improve performance.

Table 2: Similarity measures between vectors v and u, where v_i is the ith component of v.

Euclidean:          $\frac{1}{1 + \sqrt{\sum_{i=1}^{n}(u_i - v_i)^2}}$
Cityblock:          $\frac{1}{1 + \sum_{i=1}^{n}|u_i - v_i|}$
Chebyshev:          $\frac{1}{1 + \max_i |u_i - v_i|}$
Cosine:             $\frac{u \cdot v}{|u||v|}$
Correlation:        $\frac{(u - \mu_u) \cdot (v - \mu_v)}{|u||v|}$
Dice:               $\frac{2\sum_{i=0}^{n}\min(u_i, v_i)}{\sum_{i=0}^{n} u_i + v_i}$
Jaccard:            $\frac{u \cdot v}{\sum_{i=0}^{n} u_i + v_i}$
Jaccard2:           $\frac{\sum_{i=0}^{n}\min(u_i, v_i)}{\sum_{i=0}^{n}\max(u_i, v_i)}$
Lin:                $\frac{\sum_{i=0}^{n} u_i + v_i}{|u| + |v|}$
Tanimoto:           $\frac{u \cdot v}{|u| + |v| - u \cdot v}$
Jensen-Shannon Div: $1 - \frac{\frac{1}{2}\left(D(u \,\|\, \frac{u+v}{2}) + D(v \,\|\, \frac{u+v}{2})\right)}{\sqrt{2 \log 2}}$
α-skew:             $1 - \frac{D(u \,\|\, \alpha v + (1 - \alpha)u)}{\sqrt{2 \log 2}}$
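To make the two best-performing measures from Table 2 concrete, here is a minimal NumPy sketch of Cosine and the mean-adjusted Correlation measure. The function names are ours, not taken from any released code, and the vectors are assumed to be non-zero.

```python
import numpy as np

def cosine(u, v):
    # Standard cosine between two co-occurrence (or weighted) vectors.
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def correlation(u, v):
    # Mean-adjusted cosine as defined in Table 2: centre each vector,
    # but normalise by the norms of the original (uncentred) vectors.
    u_c = u - u.mean()
    v_c = v - v.mean()
    return np.dot(u_c, v_c) / (np.linalg.norm(u) * np.linalg.norm(v))

# Example: correlation(np.array([1., 2., 3.]), np.array([2., 4., 7.]))
```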
Experiments
The parameter space is too large to analyse exhaustively, and so we adopted a strategy for how to navigate through it, selecting certain parameters to investigate first, which then get fixed or "clamped" in the remaining experiments. Unless specified otherwise, vectors are generated with the following restrictions and transformations on features: stopwords are removed, numbers mapped to 'NUM', and only strings consisting of alphanumeric characters are allowed. In all experiments, the features consist of the frequency-ranked first n words in the given source corpus. Four of the five similarity datasets (RG, MC, W353, MEN) contain continuous scales of similarity ratings for word pairs; hence we follow standard practice in using a Spearman correlation coefficient ρ_s for evaluation. The fifth dataset (TOEFL) is a set of multiple-choice questions, for which an accuracy measure is appropriate. Calculating an aggregate score over all datasets is non-trivial, since taking the mean of correlation scores leads to an under-estimation of performance; hence for the aggregate score we use the Fisher-transformed z-variable of the correlation datasets, and take the weighted average of its inverse over the correlation datasets and the TOEFL accuracy score (Silver and Dunlap, 1987).

Table 3: Term weighting schemes. f_ij denotes the target word frequency in a particular context, f_i the total target word frequency, f_j the total context frequency, N the total of all frequencies, and n_j the number of non-zero contexts. P(t_ij|c_j) is defined as f_ij/f_j and P(t_ij) as f_ij/N.

None:        $w_{ij} = f_{ij}$
TF-IDF:      $w_{ij} = \log(f_{ij}) \times \log(\frac{N}{n_j})$
TF-ICF:      $w_{ij} = \log(f_{ij}) \times \log(\frac{N}{f_j})$
Okapi BM25:  $w_{ij} = \frac{f_{ij}}{0.5 + 1.5 \times \frac{f_j}{\overline{f_j}} + f_{ij}} \log\frac{N - n_j + 0.5}{f_{ij} + 0.5}$
ATC:         $w_{ij} = \frac{(0.5 + 0.5 \times \frac{f_{ij}}{\max f})\,\log(\frac{N}{n_j})}{\sqrt{\sum_{i=1}^{N}\left[(0.5 + 0.5 \times \frac{f_{ij}}{\max f})\log(\frac{N}{n_j})\right]^2}}$
LTU:         $w_{ij} = \frac{(\log(f_{ij}) + 1.0)\,\log(\frac{N}{n_j})}{0.8 + 0.2 \times f_j \times \frac{j}{f_j}}$
MI:          $w_{ij} = \log\frac{P(t_{ij}|c_j)}{P(t_{ij})P(c_j)}$
PosMI:       $\max(0, \mathrm{MI})$
T-Test:      $w_{ij} = \frac{P(t_{ij}|c_j) - P(t_{ij})P(c_j)}{\sqrt{P(t_{ij})P(c_j)}}$
χ²:          see (Curran, 2004, p. 83)
Lin98a:      $w_{ij} = \frac{f_{ij} \times f}{f_i \times f_j}$
Lin98b:      $w_{ij} = -1 \times \log\frac{n_j}{N}$
Gref94:      $w_{ij} = \frac{\log f_{ij} + 1}{\log n_j + 1}$
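As an illustration of how two of the weighting schemes in Table 3 can be computed from a raw co-occurrence matrix, the sketch below estimates the probabilities from corpus counts. It uses the standard joint-probability (PMI) formulation rather than the paper's exact conditional notation, and the names and smoothing choices are ours.

```python
import numpy as np

def posmi_and_ttest(counts):
    """counts: (n_targets x n_contexts) array of raw co-occurrence counts f_ij."""
    counts = np.asarray(counts, dtype=float)
    N = counts.sum()
    p_tc = counts / N                             # joint probability estimate P(t, c)
    p_t = counts.sum(axis=1, keepdims=True) / N   # target marginal P(t)
    p_c = counts.sum(axis=0, keepdims=True) / N   # context marginal P(c)
    expected = p_t * p_c
    with np.errstate(divide="ignore", invalid="ignore"):
        mi = np.where(counts > 0, np.log(p_tc / expected), 0.0)
        ttest = np.where(expected > 0, (p_tc - expected) / np.sqrt(expected), 0.0)
    posmi = np.maximum(0.0, mi)                   # clip negative associations to zero
    return posmi, ttest
```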
Vector size
The first parameter we investigate is vector size, measured by the number of features. Vectors are constructed from the BNC using a window-based method, with a window size of 5 (2 words either side of the target word). We experiment with vector sizes up to 0.5M features, which is close to the total number of context words present in the entire BNC according to our preprocessing scheme.
Features are added according to frequency in the BNC, with increasingly more rare features being added. For weighting we consider both Positive Mutual Information and T-Test, which have been found to work best in previous research (Bullinaria and Levy, 2012;Curran, 2004). Similarity is computed using Cosine. The results in Figure 1 show a clear trend: for both weighting schemes, performance no longer improves after around 50,000 features; in fact, for T-test weighting, and some of the datasets, performance initially declines with an increase in features. Hence we conclude that continuing to add more rare features is detrimental to performance, and that 50,000 features or less will give good performance. An added benefit of smaller vectors is the reduction in computational cost.
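A minimal sketch of the frequency-ranked feature selection described above: candidate context words are ranked by corpus frequency and only the first n are kept as vector components. The function name, filtering details, and the toy input format are ours.

```python
from collections import Counter

def top_n_contexts(tokenised_sentences, n=50_000, stopwords=frozenset()):
    # Rank candidate context words by corpus frequency and keep the first n,
    # skipping stopwords and non-alphanumeric tokens.
    freq = Counter(tok for sent in tokenised_sentences for tok in sent
                   if tok.isalnum() and tok not in stopwords)
    return [word for word, _ in freq.most_common(n)]

# e.g. features = top_n_contexts(corpus_sentences, n=50_000)
```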
Window size
Recent studies have found that the best window size depends on the task at hand. For example, Hill et al. (2013) found that smaller windows work best for measuring similarity of concrete nouns, whereas larger window sizes work better for abstract nouns. Schulte im Walde et al. (2013) found that a large window size worked best for a compositionality dataset of German noun-noun compounds. Similar relations between window size and performance have been found for similar versus related words, as well as for similar versus associated words (Turney and Pantel, 2010).
We experiment with window sizes of 3, 5, 7, 9 and a full sentence. (A window size of n implies (n−1)/2 words either side of the target word.) We use Positive Mutual Information weighting, Cosine similarity, and vectors of size 50,000 (based on the results from Section 3.1). Figure 2 shows the results for all the similarity datasets, with the aggregated score at the bottom right.
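To illustrate the window definition used here (a window of size n covers (n−1)/2 words on either side of the target), a small counting sketch with hypothetical names:

```python
from collections import Counter, defaultdict

def window_cooccurrences(sentences, window=5):
    """Count context words within (window - 1) // 2 positions of each target token."""
    half = (window - 1) // 2
    counts = defaultdict(Counter)
    for sent in sentences:
        for i, target in enumerate(sent):
            lo, hi = max(0, i - half), min(len(sent), i + half + 1)
            for j in range(lo, hi):
                if j != i:
                    counts[target][sent[j]] += 1
    return counts

# window_cooccurrences([["the", "cat", "purrs", "loudly"]], window=3)["cat"]
# -> Counter({"the": 1, "purrs": 1})
```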
Performance was evaluated on three corpora, in order to answer three questions: Does window size affect performance? Does corpus size interact with window size? Does corpus sub-spacing interact with window size? Figure 2 clearly shows the answer to all three questions is "yes". First, ukWaC consistently outperforms the BNC, across all window sizes, indicating that a larger source corpus leads to better performance. Second, we see that the larger ukWaC performs better with smaller window sizes compared to the BNC, with the best ukWaC performance typically being found with a window size of only 3. For the BNC, it appears that a larger window is able to offset the smaller size of corpus to some extent.

Figure 2: Impact of window size across three corpora
We also evaluated on a sub-spaced Wikipedia source corpus similar to Stone et al. (2008), which performs much better with larger window sizes than the BNC or ukWaC. Our explanation for this result is that sub-spacing, resulting from searching for Wikipedia pages with the appropriate target terms, provides a focused, less noisy corpus in which context words some distance from the target word are still relevant to its meaning.
In summary, the highest score is typically achieved with the largest source corpora and smallest window size, with the exception of the much smaller sub-spaced Wikipedia corpus.
Context
The notion of context plays a key role in VSMs. Pado and Lapata (2007) present a comparison of window-based versus dependency-based methods and conclude that dependency-based contexts give better results. We also compare window-based and dependency-based models.
Dependency-parsed versions of the BNC and ukWaC were used to construct syntactically-informed vectors, with a single, labelled arc between the target word and context word. 3 Since this effectively provides a window size of 3, we also use a window size of 3 for the window-based method (which provided the best results in Section 3.2 with the ukWaC corpus). As well as the ukWaC and BNC source corpora, we use the Google syntactic N-gram corpus (Goldberg and Orwant, 2013), which is one of the largest corpora to date, and which consists of syntactic n-grams as opposed to window-based n-grams. We use vectors of size 50,000 with Positive Mutual Information weighting and Cosine similarity. Due to its size and associated computational cost, we used only 10,000 contexts for the vectors generated from the syntactic N-gram corpus. The results are shown in Figure 3.

Figure 3: Window versus dependency contexts
In contrast to the idea that dependency-based methods outperform window-based methods, we find that the window-based models outperform dependency-based models when they are constructed from the same corpus using the small window size. However, Google's syntactic N-gram corpus does indeed outperform window-based methods, even though smaller vectors were used for the Google models (10,000 vs. 50,000 features). We observe large variations across datasets, with window-based methods performing particularly well on some, but not all. In particular, window-based methods clearly outperform dependency-based methods on the RG dataset (for the same source corpus), whereas the opposite trend is observed for the TOEFL synonym dataset. The summary is that the model built from the syntactic N-grams is the overall winner, but when we compare both methods on the same corpus, the window-based method on a large corpus appears to work best (given the small window size).

3 The Clark and Curran (2007) parser was used to provide the dependencies.
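For comparison with the window-based sketch above, a dependency-based context can be read off parser output as a (related word, relation) pair joined by a single labelled arc, e.g. run_subj for a dog that is the subject of run. The sketch below works over generic (head, relation, dependent) triples; this triple format and the inverse-relation marking are our own simplifying assumptions, not the actual output format of the Clark and Curran parser or the Google N-gram corpus.

```python
from collections import Counter, defaultdict

def dependency_contexts(triples):
    """triples: iterable of (head, relation, dependent) tuples from a parsed corpus."""
    counts = defaultdict(Counter)
    for head, rel, dep in triples:
        counts[dep][f"{head}_{rel}"] += 1        # context seen from the dependent
        counts[head][f"{dep}_{rel}-1"] += 1      # inverse direction, seen from the head
    return counts

# dependency_contexts([("run", "subj", "dog")])["dog"] -> Counter({"run_subj": 1})
```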
Feature granularity
Stemming and lemmatisation are standard techniques in NLP and IR to reduce data sparsity. However, with large enough corpora it may be that the loss of information through generalisation hurts performance. In fact, it may be that increased granularity - through the use of grammatical tags - can lead to improved performance. We test these hypotheses by comparing four types of processed context words: lemmatised, stemmed, POS-tagged, and tagged with CCG lexical categories (which can be thought of as fine-grained POS tags (Clark and Curran, 2007)). 4 The source corpora are BNC and ukWaC, using a window-based method with windows of size 5, Positive Mutual Information weighting, vectors of size 50,000 and Cosine similarity. The results are reported in Figure 4.
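Since the footnote notes that NLTK's Porter stemmer and WordNet lemmatiser were used, a small sketch of mapping a raw context word onto each feature granularity might look as follows; the function and granularity names are ours, and the lemmatiser requires the NLTK "wordnet" data to be downloaded.

```python
from nltk.stem import PorterStemmer, WordNetLemmatizer

stemmer = PorterStemmer()
lemmatiser = WordNetLemmatizer()  # needs: nltk.download("wordnet")

def normalise(token, granularity="normal", tag=None):
    # Map a raw context word onto the chosen feature granularity.
    if granularity == "stemmed":
        return stemmer.stem(token)
    if granularity == "lemmatised":
        return lemmatiser.lemmatize(token)
    if granularity == "tagged" and tag is not None:
        return f"{token}_{tag}"   # e.g. a POS tag or CCG lexical category suffix
    return token
```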
The ukWaC-generated vectors outperform the BNC-generated ones on all but a single instance for each of the granularities. Stemming yields the best overall performance, and increasing the granularity does not lead to better results. Even with a very large corpus like ukWaC, stemming yields significantly better results than not reducing the feature granularity at all. Conversely, apart from the results on the TOEFL synonym dataset, increasing the feature granularity of contexts by including POS tags or CCG categories does not yield any improvement.
Similarity-weighting combination
There is contrasting evidence in the literature regarding which combination of similarity metric and weighting scheme works best. Here we investigate this question using vectors of size 50,000, no processing of the context features (i.e., "normal" feature granularity), and a window-based method with a window size of 5. Aggregated scores across the datasets are reported in Tables 4 and 5 for the BNC and ukWaC, respectively.
There are some clear messages to be taken from these large tables of results. First, two weighting schemes perform better than the others: Positive Mutual Information (PosMI) and T-Test. On the BNC, the former yields the best results. There are three similarity metrics that perform particularly well: Cosine, Correlation and the Tanimoto coefficient (the latter also being similar to Cosine; see Table 2). The Correlation similarity metric has the most consistent performance across the different weighting schemes, and yields the highest score for both corpora. The most consistent weighting scheme across the two source corpora and similarity metrics appears to be PosMI. The highest combined aggregate score is that of PosMI with the Correlation metric, in line with the conclusion of Bullinaria and Levy (2012) that PosMI is the best weighting scheme. 5 However, for the large ukWaC corpus, T-Test achieves similarly high aggregate scores, in line with the work of Curran (2004). When we look at these two weighting schemes in more detail, we see that T-Test works best for the RG and MC datasets, while PosMI works best for the others; see Table 6. Correlation is the best similarity metric in all cases.

Table 6: Similarity scores on individual datasets for positive mutual information (P) and T-test (T) weighting, with cosine (COS) and correlation (COR) similarity

Figure 5: Finding the optimal "contiguous subvector" of size 10,000
Optimal subvector
Stopwords are typically removed from vectors and not used as features. However, Bullinaria and Levy (2012) find that removing stopwords has no effect on performance. A possible explanation is that, since they are using a weighting scheme, the weights of stopwords are low enough that they have effectively been removed anyhow. This raises the question: are we removing stopwords because they contribute little towards the meaning of the target word, or are we removing them because they have high frequency?
The experiment used ukWaC, with a window-based method and window size of 5, normal feature granularity, Cosine similarity and a sliding vector of size 10,000. Having a sliding vector implies that we throw away up to the first 40,000 contexts as we slide across to the 50,000 mark (replacing the higher frequency contexts with lower frequency ones). In effect, we are trying to find the cut-off point where the 10,000-component "contiguous subvector" of the target word vector is optimal (where the features are ordered by frequency). Results are given for PosMI, T-Test and no weighting at all.
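The sliding 10,000-feature "contiguous subvector" can be pictured as selecting a contiguous band of the frequency-ranked context list; a hypothetical sketch, with names of our own choosing:

```python
def frequency_band(ranked_contexts, offset, size=10_000):
    """Return the contiguous slice of frequency-ranked contexts starting at `offset`.

    offset=0 keeps the highest-frequency contexts; increasing the offset slides
    the band towards rarer contexts, as in the optimal-subvector experiment.
    """
    return ranked_contexts[offset:offset + size]

# e.g. frequency_band(features, offset=20_000) selects contexts ranked 20,001-30,000.
```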
The results are shown in Figure 5. T-test outperforms PosMI at the higher frequency ranges (to the left of the plots) but PosMI gives better results for some of the datasets further to the right. For both weighting schemes the performance decreases as high frequency contexts are replaced with lower frequency contexts.
A different picture emerges when no weighting is used, however. Here the performance can increase as high-frequency contexts are replaced with lower-frequency ones, with optimal performance comparable to when weighting is used. There are some scenarios where it may be advantageous not to use weighting, for example in an online setting where the total set of vectors is not fixed; in situations where use of a dimensionality reduction technique does not directly allow for weighting, such as random indexing (Sahlgren, 2006); as well as in settings where calculating weights is too expensive. Where to stop the sliding window varies with the datasets, however, and so our conclusion is that the default scheme should be weighting plus high frequency contexts.
Compositionality
In order to examine whether optimal parameters carry over to vectors that are combined into phrasal vectors using a composition operator, we perform a subset of our experiments on the canonical compositionality dataset from Mitchell and Lapata (2010), using vector addition and pointwise multiplication (the best performing operators in the original study). We evaluate using two source corpora (the BNC and ukWaC) and two window sizes (small, with a window size of 3; and big, where the full sentence is the window). In addition to the weighting schemes from the previous experiment, we include Mitchell & Lapata's own weighting scheme, which (in our notation) is defined as
$w_{ij} = \frac{f_{ij} \times N}{f_i \times f_j}$.
While all weighting schemes and similarity metrics were tested, we report only the best performing ones: correlations below 0.5 were omitted for the sake of brevity. Table 7 shows the results.
We find that many of our findings continue to hold. PosMI and T-Test are the best performing weighting schemes, together with Mitchell & Lapata's own weighting scheme. We find that addition outperforms multiplication (contrary to the original study) and that small window sizes work best, except in the VO case. Performance across corpora is comparable. The best performing similarity metrics are Cosine and Correlation, with the latter having a slight edge over the former.
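A minimal sketch of the two composition operators (point-wise addition and multiplication) and of scoring composed phrase pairs against human ratings with a Spearman correlation; the function names and the input layout are our own, and SciPy is assumed for the correlation.

```python
import numpy as np
from scipy.stats import spearmanr

def compose(u, v, op="add"):
    # Point-wise addition or multiplication of two word vectors.
    return u + v if op == "add" else u * v

def evaluate(pairs, human_ratings, op="add"):
    """pairs: list of ((u1, v1), (u2, v2)) word-vector pairs aligned with ratings."""
    model_scores = []
    for (u1, v1), (u2, v2) in pairs:
        p1, p2 = compose(u1, v1, op), compose(u2, v2, op)
        model_scores.append(np.dot(p1, p2) / (np.linalg.norm(p1) * np.linalg.norm(p2)))
    return spearmanr(model_scores, human_ratings).correlation
```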
Conclusion
Our experiments were designed to investigate a wide range of VSM parameters, using a variety of evaluation tasks and several source corpora. Across each of the experiments, results are competitive with the state of the art. Some important messages can be taken away from this study:
Experiment 1 Larger vectors do not always lead to better performance. As vector size increases, performance stabilises, and a vector size of around 50,000 appears to be optimal.
Experiment 2 The size of the window has a clear impact on performance: a large corpus with a small window size performs best, but high performance can be achieved on a small subspaced corpus, if the window size is large.
Experiment 3 The size of the source corpus is more important than whether the model is window-or dependency-based. Window-based methods with a window size of 3 yield better results than dependency-based methods with a window of 3 (i.e. having a single arc). The Google Syntactic N-gram corpus yields very good perfor-mance, but it is unclear whether this is due to being dependency-based or being very large.
Experiment 4
The granularity of the context words has a relatively low impact on performance, but stemming yields the best results.
Experiment 5 The optimal combination of weighting scheme and similarity metric is Positive Mutual Information with a mean-adjusted version of Cosine that we have called Correlation. Another high-performing weighting scheme is T-Test, which works better for smaller vector sizes. The Correlation similarity metric consistently outperforms Cosine, and we recommend its use.
Experiment 6 Use of a weighting scheme obviates the need for removing high-frequency features. Without weighting, many of the highfrequency features should be removed. However, if weighting is an option we recommend its use.
Compositionality The best parameters for individual vectors generally carry over to a compositional similarity task where phrasal similarity is evaluated by combining vectors into phrasal vectors.
Furthermore, we observe that in general performance increases as source corpus size increases, so we recommend using a corpus such as ukWaC over smaller corpora like the BNC. Likewise, since the MEN dataset is the largest similarity dataset available and mirrors our aggregate score the best across the various experiments, we recommend evaluating on that similarity task if only a single dataset is used for evaluation.
Obvious extensions include an analysis of the performance of the various dimensionality reduction techniques, examining the importance of window size and feature granularity for dependencybased methods, and further exploring the relation between the size and frequency distribution of a corpus together with the optimal characteristics (such as the high-frequency cut-off point) of vectors generated from that source.
Figure 1: Impact of vector size on performance across different datasets

Figure 4: Feature granularity: stemmed (S), lemmatised (L), normal (N), POS-tagged (T) and CCG-tagged (C)
Table 1: Datasets for evaluation.

Dataset   Pairings   Words
RG        65         48
MC        30         39
W353      353        437
MEN       3000       751
TOEFL     80         400
M&L10     324        314
Table 4: Aggregated scores for combinations of weighting schemes and similarity metrics using the BNC. The similarity metrics are Cosine (COS), Correlation (COR), Dice (DIC), Jaccard (JC1), Jaccard2 (JC2), Tanimoto (TAN), Lin (LIN), Euclidean (EUC), CityBlock (CIB), Chebyshev (CHS), Jensen-Shannon Divergence (JSD) and α-skew (ASK).
Table 5: Aggregated scores for combinations of weighting schemes and similarity metrics using ukWaC.

ukWaC        COS   COR   DIC   JC1   JC2   TAN   LIN   EUC   CIB   CHS   JSD   ASK
none         0.55  0.55  0.28  0.35  0.24  0.41  0.31  0.06  0.09  0.08  0.56  0.49
tfidf        0.45  0.47  0.26  0.30  0.20  0.28  0.22  0.14  0.12  0.16  0.37  0.27
tficf        0.45  0.49  0.27  0.33  0.20  0.29  0.24  0.13  0.11  0.09  0.37  0.28
okapi        0.37  0.42  0.33  0.37  0.18  0.27  0.26  0.26  0.17  0.12  0.34  0.20
atc          0.34  0.42  0.13  0.13  0.08  0.15  0.28  0.10  0.09  0.07  0.28  0.15
ltu          0.43  0.48  0.30  0.34  0.19  0.26  0.25  0.26  0.16  0.24  0.36  0.23
mi           0.51  0.53  0.18  0.51  0.16  0.28  0.37  0.18  0.10  0.09  0.12  nan
posmi        0.67  0.70  0.56  0.62  0.42  0.59  0.52  0.23  0.15  0.06  0.60  0.49
ttest        0.70  0.70  0.16  0.48  0.10  0.70  0.22  0.16  0.11  0.15  nan   nan
chisquared   0.57  0.58  0.52  0.56  0.44  0.52  nan   0.08  0.06  0.10  0.63  0.60
lin98b       0.43  0.63  0.31  0.37  0.20  0.23  0.26  0.09  0.10  nan   0.34  0.24
gref94       0.48  0.54  0.27  0.33  0.20  0.17  0.23  0.13  0.11  0.09  0.38  0.25
Table 7: Selected Spearman ρ scores on the Mitchell & Lapata (2010) compositionality dataset.
Another obvious parameter would be dimensionality reduction, which we chose not to include because it does not represent a fundamental aspect of VSM construction: dimensionality reduction relies on some original non-reduced model, and directly depends on its quality.
We will use the term "feature" or "context" or "context word" to refer to contextual elements.
Using NLTK's Porter stemmer and WordNet lemmatiser.
In some cases, the combination of weighting scheme and similarity metric results in a division by zero or leads to taking the logarithm of a negative number, in which cases we report the aggregate scores as nan (not-a-number).
Acknowledgments
This work has been supported by EPSRC grant EP/I037512/1. We would like to thank Laura Rimell, Tamara Polajnar and Felix Hill for helpful comments and suggestions.
Marco Baroni, Silvia Bernardini, Adriano Ferraresi, and Eros Zanchetta. 2009. The WaCky Wide Web: A collection of very large linguistically processed Web-crawled corpora. Language Resources and Evaluation, 43(3):209-226.
Marco Baroni, Raffaella Bernardi, and Roberto Zamparelli. 2013. Frege in Space: A program for compositional distributional semantics. Linguistic Issues in Language Technologies (LiLT).
Steven Bird, Ewan Klein, and Edward Loper. 2009. Natural Language Processing with Python. O'Reilly Media.
William Blacoe and Mirella Lapata. 2012. A Comparison of Vector-based Representations for Semantic Composition. In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pages 546-556, Jeju Island, Korea, July. Association for Computational Linguistics.
Elia Bruni, Gemma Boleda, Marco Baroni, and N. K. Tran. 2012. Distributional Semantics in Technicolor. In Proceedings of the ACL 2012.
John A. Bullinaria and Joseph P. Levy. 2007. Extracting Semantic Representations from Word Co-occurrence Statistics: A computational study. Behavior Research Methods, 39:510-526.
John A. Bullinaria and Joseph P. Levy. 2012. Extracting Semantic Representations from Word Co-occurrence Statistics: Stop-lists, Stemming and SVD. Behavior Research Methods, 44:890-907.
L. Burnard. 2007. Reference Guide for the British National Corpus.
Stephen Clark and James R. Curran. 2007. Wide-Coverage Efficient Statistical Parsing with CCG and Log-Linear Models. Computational Linguistics, 33(4):493-552.
Stephen Clark. 2014. Vector Space Models of Lexical Meaning (to appear). In Shalom Lappin and Chris Fox, editors, Handbook of Contemporary Semantics. Wiley-Blackwell, Oxford.
James R. Curran and Marc Moens. 2002a. Improvements in Automatic Thesaurus Extraction. In Proceedings of the ACL-02 Workshop on Unsupervised Lexical Acquisition, Volume 9, pages 59-66. Association for Computational Linguistics.
James R. Curran and Marc Moens. 2002b. Scaling Context Space. In Proceedings of the 40th Annual Meeting on Association for Computational Linguistics, pages 231-238. Association for Computational Linguistics.
James R. Curran. 2004. From Distributional to Semantic Similarity. Ph.D. thesis, University of Edinburgh.
Katrin Erk. 2012. Vector Space Models of Word Meaning and Phrase Meaning: A Survey. Language and Linguistics Compass, 6(10):635-653.
Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2002. Placing Search in Context: The Concept Revisited. ACM Transactions on Information Systems, 20(1):116-131.
Yoav Goldberg and Jon Orwant. 2013. A Dataset of Syntactic-Ngrams over Time from a Very Large Corpus of English Books. In Second Joint Conference on Lexical and Computational Semantics (*SEM), Volume 1: Proceedings of the Main Conference and the Shared Task: Semantic Textual Similarity, pages 241-247, Atlanta, Georgia, USA, June. Association for Computational Linguistics.
Gregory Grefenstette. 1994. Explorations in Automatic Thesaurus Discovery. Kluwer Academic Publishers, Norwell, MA, USA.
Z. Harris. 1954. Distributional Structure. Word, 10(23):146-162.
F. Hill, D. Kiela, and A. Korhonen. 2013. Concreteness and Corpora: A Theoretical and Practical Analysis. In Proceedings of ACL 2013, Workshop on Cognitive Modelling and Computational Linguistics, Sofia, Bulgaria.
Thomas K. Landauer and Susan T. Dumais. 1997. A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104(2):211-240.
Gabriella Lapesa and Stefan Evert. 2013. Evaluating neighbor rank and distance measures as predictors of semantic priming. In Proceedings of the ACL Workshop on Cognitive Modeling and Computational Linguistics (CMCL 2013), Sofia, Bulgaria.
G. A. Miller and W. G. Charles. 1991. Contextual Correlates of Semantic Similarity. Language and Cognitive Processes, 6(1):1-28.
Jeff Mitchell and Mirella Lapata. 2010. Composition in Distributional Models of Semantics. Cognitive Science, 34(8):1388-1429.
Sebastian Pado and Mirella Lapata. 2007. Dependency-based Construction of Semantic Space Models. Computational Linguistics, 33(2):161-199.
Douglas L. T. Rohde, Laura M. Gonnerman, and David C. Plaut. 2006. An Improved Model of Semantic Similarity based on Lexical Co-occurrence. Communications of the ACM, 8:627-633.
Herbert Rubenstein and John B. Goodenough. 1965. Contextual Correlates of Synonymy. Communications of the ACM, 8(10):627-633, October.
Magnus Sahlgren. 2006. The Word-Space Model: Using distributional analysis to represent syntagmatic and paradigmatic relations between words in high-dimensional vector spaces. Ph.D. thesis, Department of Linguistics, Stockholm University.
Sabine Schulte im Walde, Stefan Müller, and Stephen Roller. 2013. Exploring Vector Space Models to Predict the Compositionality of German Noun-Noun Compounds. In Proceedings of the 2nd Joint Conference on Lexical and Computational Semantics, pages 255-265, Atlanta, GA.
N. Clayton Silver and William P. Dunlap. 1987. Averaging Correlation Coefficients: Should Fisher's z Transformation Be Used? Journal of Applied Psychology, 72(1):146-148, February.
Mark Steedman. 2000. The Syntactic Process. MIT Press, Cambridge, MA, USA.
Benjamin P. Stone, Simon J. Dennis, and Peter J. Kwantes. 2008. A Systematic Comparison of Semantic Models on Human Similarity Rating Data: The Effectiveness of Subspacing. In The Proceedings of the Thirtieth Conference of the Cognitive Science Society.
Peter D. Turney and Patrick Pantel. 2010. From Frequency to Meaning: Vector space models of semantics. Journal of Artificial Intelligence Research, 37(1):141-188, January.
Julie Weeds, David Weir, and Diana McCarthy. 2004. Characterising Measures of Lexical Distributional Similarity. In Proceedings of Coling 2004, pages 1015-1021, Geneva, Switzerland, August. COLING.
S. Zelikovitz and M. Kogan. 2006. Using Web Searches on Important Words to create Background Sets for LSI Classification. In Proceedings of the 19th International FLAIRS Conference, pages 598-603, Menlo Park, CA. AAAI Press. |
1,843,560 | Evaluation Dataset (DT-Grade) and Word Weighting Approach towards Constructed Short Answers Assessment in Tutorial Dialogue Context | Evaluating student answers often requires contextual information, such as previous utterances in conversational tutoring systems. For example, students use coreferences and write elliptical responses, i.e. incomplete but can be interpreted in context. The DT-Grade corpus which we present in this paper consists of short constructed answers extracted from tutorial dialogues between students and an Intelligent Tutoring System and annotated for their correctness in the given context and whether the contextual information was useful. The dataset contains 900 answers (of which about 25% required contextual information to properly interpret them). We also present a baseline system developed to predict the correctness label (such as correct, correct but incomplete) in which weights for the words are assigned based on context. | [
1671874,
15813737,
2988106,
14068874,
691094,
2233498,
11650107,
2245052,
14807742
] | Evaluation Dataset (DT-Grade) and Word Weighting Approach towards Constructed Short Answers Assessment in Tutorial Dialogue Context
Evaluation Dataset (DT-Grade) and Word Weighting Approach towards Constructed Short Answers Assessment in Tutorial Dialogue Context

Rajendra Banjade (rbanjade@memphis.edu), Nabin Maharjan (nmharjan@memphis.edu), Nobal B. Niraula (nbnraula@memphis.edu), Dipesh Gautam (dgautam@memphis.edu), Borhan Samei (bsamei@memphis.edu), and Vasile Rus (vrus@memphis.edu)
Department of Computer Science / Institute for Intelligent Systems, The University of Memphis, Memphis, TN, USA

Proceedings of the 11th Workshop on Innovative Use of NLP for Building Educational Applications, San Diego, California, June 16.
Evaluating student answers often requires contextual information, such as previous utterances in conversational tutoring systems. For example, students use coreferences and write elliptical responses, i.e. incomplete but can be interpreted in context. The DT-Grade corpus which we present in this paper consists of short constructed answers extracted from tutorial dialogues between students and an Intelligent Tutoring System and annotated for their correctness in the given context and whether the contextual information was useful. The dataset contains 900 answers (of which about 25% required contextual information to properly interpret them). We also present a baseline system developed to predict the correctness label (such as correct, correct but incomplete) in which weights for the words are assigned based on context.
Introduction
Constructed short answers are responses produced by students to questions, e.g. in a test or in the middle of a tutorial dialogue. Such constructed answers are very different from answers to multiple choice questions, where students just choose an option from the given list of choices. In this paper, we present a corpus called DT-Grade 1 which contains constructed short answers generated during interaction with a state-of-the-art conversational Intelligent Tutoring System (ITS) called DeepTutor (Rus et al., 2013). The main instructional task during tutoring was conceptual problem solving in the area of Newtonian physics. The answers in our data set are shorter than 100 words. We annotated the instances, i.e. the student generated responses, for correctness using one of the following labels: correct, correct-but-incomplete, contradictory, or incorrect. The student answers were evaluated with respect to target/ideal answers provided by Physics experts while also considering the context of the student-tutor interaction, which consists of the Physics problem description and the dialogue history related to that problem. In fact, during annotation we limited our context to only the immediately preceding tutor question and the problem description. This decision was based on previous work by Niraula and colleagues (2014), which showed that most referring expressions can be resolved by looking at the past utterance; that is, looking at just the previous utterance could be sufficient for our task, as considering the full dialogue context would be computationally very expensive.
1 Available at http://language.memphis.edu/dt-grade
Automatic answer assessment systems typically assess student responses by measuring how much of the targeted concept is present in the student answer. To this end, subject matter experts create target (or reference) answers to questions that students will be prompted to answer. Almost always, the student responses depend on the context (at least broadly on the context of a particular domain) but it is more prominent in some situations. Particularly in conversational tutoring systems, the meanings of students' responses often depend on the dialogue context and problem/task description. For example, students frequently use pronouns, such as they, he, she, and it, in their response to tutors' questions or other prompts.
In an analysis of tutorial conversation logs, Niraula et al. (2014) found that 68% of the pronouns used by students were referring to entities in the previous utterances or in the problem description. In addition to anaphora, complex coreferences are also employed by students.
Also, in tutorial dialogues students react often with very short answers which are easily interpreted by human tutors as the dialogue context offers support to fill-in the blanks or untold parts. Such elliptical utterances are common in conversations even when the speakers are instructed to produce more syntactically and semantically complete utterances (Carbonell, 1983). By analyzing 900 student responses given to DeepTutor tutoring systems, we have found that about 25% of the answers require some contextual information to properly interpret them.
Problem description: A car windshield collides with a mosquito, squashing it.
Tutor question: How do the amounts of the force exerted on the windshield by the mosquito and the force exerted on the mosquito by the windshield compare?
Reference answer: The force exerted by the windshield on the mosquito and the force exerted by the mosquito on the windshield are an action-reaction pair.
Student answers:
A1. Equal
A2. The force of the bug hitting the window is much less than the force that the window exerts on the bug
A3. they are equal and opposite in direction
A4. equal and opposite
Table 1: A problem and student answers to the given question.
As illustrated in the Table 1, the student answers may vary greatly. For instance, answer A1 is elliptical. The "bug" in A2 is referring to the mosquito and "they" in A3 is referring to the amount of forces exerted to each other.
In order to foster research in automatic answer assessment in context (also in general), we have annotated 900 student responses gathered from an experiment with the DeepTutor intelligent tutoring system (Rus et al., 2013). Each response was annotated for: (a) their correctness, (b) whether the contextual information was helpful in understanding the student answer, and (c) whether the student answer contains important extra information. The annotation labels, which are similar to the ones proposed by Dzikovska et al. (2013), were chosen such that there is a balance between the level of specificity and the amount of effort required for the annotation.
We also developed a baseline system using a semantic similarity approach with a word weighting scheme that utilizes contextual information.
Related Work
Nielsen et al. (2008) described a representation for reference answers, breaking them into detailed facets and annotating their relationships to the learner's answer at a finer level. They annotated a corpus (called the SCIENTSBANK corpus) containing student answers to assessment questions in 15 different science domains. Sukkarieh and Bolge (2010) introduced an ETS-built test suite towards establishing a benchmark. In the dataset, each target answer is divided into a set of main points (called content) and a recommended rubric for assigning score points.
Mohler and Mihalcea (2009) published a collection of short student answers and grades for a course in Computer Science. Most recently, a Semantic Evaluation (SemEval) shared task called Joint Student Response Analysis and 8th Recognizing Textual Entailment Challenge was organized (Dzikovska et al., 2013) to promote and streamline research in this area. The corpus used in the shared task consists of two distinct subsets: BEETLE data, based on transcripts of students interacting with BEETLE II tutorial dialogue system (Dzikovska et al., 2010), and SCIENTSBANK data. Student answers, accompanied with their corresponding questions and reference answers are labeled using five different categories. Basu et al. (2013) created a dataset called Powergrading-1.0 which contains responses from hundreds of Mechanical Turk workers to each of 20 questions from the 100 questions published by the USCIS as preparation for the citizenship test.
Our work differs in several important ways from previous work. Our dataset is annotated paying special attention to context. In addition to the tutor question, we have provided the problem description as well which provides a greater amount of contextual information and we have explicitly marked whether the contextual information was important to properly interpret/annotate the answer. Furthermore, we have annotated whether the student answer contains important extra information. This information is also very useful in building and evaluating natural language tools for automatic answer assessment.
Data Collection and Annotation
Data Collection: We created the DT-Grade dataset by extracting student answers from logged tutorial interactions between 40 junior level college students and the DeepTutor system (Rus et al., 2013). During the interactions, each student solved 9 conceptual physics problems and the interactions were in the form of purely natural language dialogues, i.e., with no mathematical expressions or special symbols. Each problem contained multiple questions, including gap-fill questions and short constructed answer questions. As we focused on creating a constructed answer assessment dataset with sentential input, we filtered out other types of questions and the corresponding student answers. We randomly picked 900 answers for the annotation.
Annotation:
The annotation was conducted by a group of graduate students and researchers who were first trained before being asked to annotate the data. The annotators had access to an annotation manual for their reference. Each annotation example (see Figure 1) contained the following information: (a) problem description (describes the scenario or context), (b) tutor question, (c) student answer in its natural form (i.e., without correcting spelling errors and grammatical errors), (d) list of reference answers for the question. The annotators were asked to read the problem and question to understand the context and to assess the correctness of the student answer with respect to reference answers. Each of the answers has been assigned one of the following labels.
Correct: Answer is fully correct in the context. Extra information, if any, in the answer is not contradicting with the answer.
Correct-but-incomplete: Whatever the student provided is correct but something is missing, i.e. it is not complete. If the answer also contains some incorrect part, the answer is treated as incorrect.
Contradictory: Answer is opposite to or very contrasting with the reference answer. For example, "equal", "less", and "greater" are contradictory to each other. However, Newton's first law and Newton's second law are not treated as contradictory since there are many commonalities between these two laws despite their names.
Incorrect: Incorrect in general, i.e. none of the above three judgments is applicable. Contradictory answers can be included in the incorrect set if we want to find all kinds of incorrect answers. As shown in Figure 1, annotators were asked to assign one of the mutually exclusive labels -correct, correct-but-incomplete, contradictory, or incorrect. Also, annotators were told to mark whether contextual information was really important to fully understand a student answer. For instance, the student answer in the Figure 1 contains the phrase "both forces" which is referring to the force of windshield and the force of mosquito in problem description. Therefore, contextual information is useful to fully understand what both forces the student is referring to. As shown in Table 1 (in Section 1), a student answer could be an elliptical sentence (i.e., does not contain complete information on its own). In such cases, annotators were asked to judge the student response based on the available contextual information and reference answers and nothing more; that is, they were explicitly told not to use their own science knowledge to fill-in the missing parts.
If a student response contained extra information (i.e., more information than in the reference/ideal answer provided by experts), we asked annotators to ignore the extra parts unless it expressed a misconception. However, we told annotator to indicate whether the student answer contains some additional important information such as a detailed explanation of their answer. The annotators were encouraged to write comments and asked to set the 'watch' flag whenever they felt a particular student response was special/different. Such 'to watch' instances were considered for further discussions with the entire team to either improve the annotation guidelines or to gain more insights regarding the student assessment task.
The dataset was divided equally among 6 annotators who then annotated independently. In order to reach a good level of inter-annotator agreement, 30 examples were randomly picked from each annotation subset and reviewed by a supervisor, i.e. one of the creators of the annotation guidelines. The agreements (in terms of Cohen's kappa) in assigning the correctness label, identifying whether the context was useful, and identifying whether the student answer contained extra information were 0.891, 0.78, and 0.82, respectively. In other words, there was significant agreement on all components of the annotation. The main disagreement was on how to use the contextual information. The disagreements were discussed among the annotator team and the annotations were revised in a few cases.
The Dataset: We have annotated 900 answers. Table 2 offers summary statistics about the dataset. Of the total answers, 40.55% are correct whereas 59.45% are less than perfect. We can see that approximately 1 in every 4 answers required contextual information to properly evaluate it.
Alignment Based Similarity and Word Weighting Approach
Approach: Once the dataset was finalized we wanted to get a sense of its difficulty level. We developed a semantic similarity approach in order to assess the correctness of student answers. Specifically, we applied an optimal word alignment based method (Rus and Lintean, 2012) to calculate the similarity between the student answer and the reference answer and then used that score to predict the correctness label using a classifier. In fact, alignment based systems have been the top performing systems in semantic evaluation challenges on semantic textual similarity (Han et al., 2013; Agirre et al., 2014; Sultan et al., 2015; Agirre et al., 2015). The challenge is to address linguistic phenomena such as ellipsis and coreference. One approach would be to use off-the-shelf tools, such as the coreference resolution tool included in the Stanford CoreNLP Toolkit (Manning et al., 2014). However, we believe that such NLP tools, which are developed and evaluated on standard datasets, potentially introduce errors into the NLP pipeline when the input texts, such as question answering data, differ from literary style or standard written texts.
As an alternative approach, we assigned a weight to each word based on the context: we gave a low weight to words in the student answer that were also found in the previous utterance, e.g. the tutoring system's question, and more weight to new content. This approach gives less weight to answers that simply repeat the content of the tutor's question and more weight to answers that add the new, asked-for information; as a special case, the approach gives more weight to concise answers (see A1 and A2 in Table 1). The same word can have a different weight based on the context. Also, it partially addresses the impact of coreference in answer grading because the same answer with and without coreferences is more likely to get comparable scores. The reference answers are usually self-contained, i.e. without coreferring expressions, and only those student answers which are also self-contained and similar to the reference answer will get a higher score. On the other hand, answers using coreferences (such as they, it) will get a lower score unless they are resolved and the student answer becomes similar to the reference answer. Giving lower weights to the words in the student answer for which the student could have used coreferences makes these two types of answers somewhat equivalent.
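Before turning to the similarity formula, the following is a minimal sketch of this contextual weighting scheme, assuming simple lemmatized tokens and a hypothetical synonym lookup (any lexical resource such as WordNet could stand behind it); it is an illustration, not the authors' implementation.

import collections

def context_weights(answer_tokens, prev_utterance_tokens, synonyms,
                    old_weight=0.4, new_weight=1.0):
    # answer_tokens / prev_utterance_tokens: lists of lemmatized tokens.
    # synonyms: dict mapping a lemma to a set of its synonyms (hypothetical helper).
    # old_weight: weight for tokens already present in the previous utterance
    #             (0.4 was the best value reported in the paper).
    prev = set(prev_utterance_tokens)
    weights = []
    for tok in answer_tokens:
        candidates = {tok} | synonyms.get(tok, set())
        weights.append(old_weight if candidates & prev else new_weight)
    return weights

# Example: words repeated from the tutor's question get the lower weight.
prev = "how do the forces compare".split()
ans = "the forces are equal".split()
print(context_weights(ans, prev, synonyms={}))  # [0.4, 0.4, 1.0, 1.0]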
Finally, the similarity score was calculated as:
sim(A, R) = \frac{2 \sum_{(a,r) \in OA} w_a \, w_r \, sim(a, r)}{\sum_{a \in A} w_a + \sum_{r \in R} w_r}
where A/R refers to the student/reference answer and a/r is a token in it. sim(a, r) refers to the similarity score between a and r calculated using a word2vec model (Mikolov et al., 2013). OA is the optimal alignment of words between A and R obtained using the Hungarian algorithm. The weights 0 ≤ w_a ≤ 1 and 0 ≤ w_r ≤ 1 refer to the weight of the word in A and R, respectively.
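A minimal sketch of this weighted optimal-alignment similarity is shown below, assuming word vectors are available in a dict vecs (e.g. from a word2vec model) and token weights come from the context_weights helper sketched above; word pairs with similarity below 0.4 are ignored, as in the paper. This is only one plausible reading of the formula, not the original code.

import numpy as np
from scipy.optimize import linear_sum_assignment

def word_sim(a, r, vecs):
    # Cosine similarity of the two word vectors; exact-match fallback for OOV words.
    if a not in vecs or r not in vecs:
        return 1.0 if a == r else 0.0
    va, vr = vecs[a], vecs[r]
    return float(np.dot(va, vr) / (np.linalg.norm(va) * np.linalg.norm(vr)))

def answer_similarity(A, R, w_A, w_R, vecs, threshold=0.4):
    # Pairwise word-to-word similarities between student answer A and reference R.
    S = np.array([[word_sim(a, r, vecs) for r in R] for a in A])
    S[S < threshold] = 0.0
    # Optimal one-to-one alignment; the Hungarian algorithm maximizes total similarity.
    rows, cols = linear_sum_assignment(-S)
    aligned = sum(w_A[i] * w_R[j] * S[i, j] for i, j in zip(rows, cols))
    return 2.0 * aligned / (sum(w_A) + sum(w_R))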
Experiments and Results: In order to avoid noisy alignments, word-to-word similarity scores below 0.4 were set to 0.0 (an empirically set threshold). The sim(A, R) score was then used with Multinomial Logistic Regression (in Weka) to predict the correctness label. If there was more than one reference answer, we chose the one with the highest similarity score with the student answer. We then set different weights (from 1.0 to 0.0) for the words found in the tutor utterance (we considered a word to be found in the previous utterance if its base form or a synonym found in WordNet 3.0 (Miller, 1995) matched any of the words in the previous utterance). We changed the weight in the student answer as well as in the reference answer, and the impact of the weight change on the classification results was assessed using a 10-fold cross validation approach. The changes in classification accuracy with changing weights are presented in Figure 2. Giving a weight of 1.0 to each word is equivalent to aligning words in the student answer with the reference answer without looking at the context. We can see an improvement in classification accuracy as the weight of words found in the previous utterance is reduced, up to a weight of 0.4 (accuracy 49.33%; kappa = 0.22), after which accuracy decreases. This indicates that the words found in the previous utterance should get some weight, but new words should get more importance. This approach is somewhat intuitive, but deeper semantic understanding is required in order to improve the performance. For instance, sometimes this word weighting approach infers more information and gives higher weight to an incomplete utterance where the student's true understanding of the context is hard to predict. Furthermore, it is non-trivial to use additional context, such as the problem description, including assumptions and graphical illustrations.
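The weight sweep itself can be written as a short loop; the sketch below uses scikit-learn instead of Weka (an assumption for illustration only) and reuses the context_weights and answer_similarity helpers sketched above. The examples variable is a hypothetical list of (student_tokens, reference_tokens, prev_utterance_tokens, label) tuples.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def sweep_old_word_weight(examples, vecs, synonyms):
    labels = [lab for *_, lab in examples]
    # Try weights 1.0, 0.9, ..., 0.0 for words found in the previous utterance.
    for old_w in np.arange(1.0, -0.01, -0.1):
        feats = []
        for stud, ref, prev, _ in examples:
            w_s = context_weights(stud, prev, synonyms, old_weight=old_w)
            w_r = context_weights(ref, prev, synonyms, old_weight=old_w)
            feats.append([answer_similarity(stud, ref, w_s, w_r, vecs)])
        clf = LogisticRegression(max_iter=1000)
        acc = cross_val_score(clf, feats, labels, cv=10).mean()
        print(f"old-word weight {old_w:.1f}: accuracy {acc:.3f}")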
Conclusion
We presented a corpus called DT-Grade which contains student answers given to an intelligent tutoring system and annotated for their correctness in context. We explicitly marked whether the contextual information was required to properly understand the student answer. We also annotated whether the answer contains extra information. That additional information can be correct or incorrect, as there is no specific reference to compare it with, but answer grading systems should be able to handle it.
We also presented a baseline system in which we used the semantic similarity generated by the optimal alignment with contextual word weighting as a feature in a classifier for predicting the correctness label. However, there is room for improvement; using additional features in the classifier or developing a joint inference model, such as a Markov Logic Network incorporating different linguistic phenomena, are two possible future directions.
Figure 1: An annotation example.
Figure 2: Classification accuracy and weight of the words that are found in the last utterance.
Table 2: Summary of the DT-Grade dataset.
Acknowledgments
This research was supported by the Institute for Education Sciences (IES) under award R305A100875 to Dr. Vasile Rus. All opinions and findings presented here are solely the authors'.
Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2014. SemEval-2014 task 10: Multilingual semantic textual similarity. In Proceedings of the 8th International Workshop on Semantic Evaluation (SemEval 2014), pages 81-91.
Eneko Agirre, Carmen Banea, Claire Cardie, Daniel Cer, Mona Diab, Aitor Gonzalez-Agirre, Weiwei Guo, Inigo Lopez-Gazpio, Montse Maritxalar, Rada Mihalcea, et al. 2015. SemEval-2015 task 2: Semantic textual similarity, English, Spanish and pilot on interpretability. In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), pages 252-263.
Rajendra Banjade, Nobal B. Niraula, Nabin Maharjan, Vasile Rus, Dan Stefanescu, Mihai Lintean, and Dipesh Gautam. 2015. NeRoSim: A system for measuring and interpreting semantic textual similarity. In SemEval-2015, page 164.
Sumit Basu, Chuck Jacobs, and Lucy Vanderwende. 2013. Powergrading: a clustering approach to amplify human effort for short answer grading. Transactions of the Association for Computational Linguistics, 1:391-402.
Jaime G. Carbonell. 1983. Discourse pragmatics and ellipsis resolution in task-oriented natural language interfaces. In Proceedings of the 21st Annual Meeting on Association for Computational Linguistics, pages 164-168. Association for Computational Linguistics.
Myroslava O. Dzikovska, Johanna D. Moore, Natalie Steinhauser, Gwendolyn Campbell, Elaine Farrow, and Charles B. Callaway. 2010. Beetle II: a system for tutoring and computational linguistics experimentation. In Proceedings of the ACL 2010 System Demonstrations, pages 13-18. Association for Computational Linguistics.
Myroslava O. Dzikovska, Rodney D. Nielsen, Chris Brew, Claudia Leacock, Danilo Giampiccolo, Luisa Bentivogli, Peter Clark, Ido Dagan, and Hoa T. Dang. 2013. SemEval-2013 task 7: The joint student response analysis and 8th recognizing textual entailment challenge. Technical report, DTIC Document.
Lushan Han, Abhay Kashyap, Tim Finin, James Mayfield, and Jonathan Weese. 2013. UMBC_EBIQUITY-CORE: Semantic textual similarity systems. In Proceedings of the Second Joint Conference on Lexical and Computational Semantics, volume 1, pages 44-52.
Christopher D. Manning, Mihai Surdeanu, John Bauer, Jenny Rose Finkel, Steven Bethard, and David McClosky. 2014. The Stanford CoreNLP natural language processing toolkit. In ACL (System Demonstrations), pages 55-60.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
George A. Miller. 1995. WordNet: a lexical database for English. Communications of the ACM, 38(11):39-41.
Michael Mohler and Rada Mihalcea. 2009. Text-to-text semantic similarity for automatic short answer grading. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics, pages 567-575. Association for Computational Linguistics.
Rodney D. Nielsen, Wayne Ward, James H. Martin, and Martha Palmer. 2008. Annotating students' understanding of science concepts. In LREC.
Nobal B. Niraula, Vasile Rus, Rajendra Banjade, Dan Stefanescu, William Baggett, and Brent Morgan. 2014. The DARE corpus: A resource for anaphora resolution in dialogue based intelligent tutoring systems. In LREC, pages 3199-3203.
Vasile Rus and Mihai Lintean. 2012. A comparison of greedy and optimal assessment of natural language student input using word-to-word similarity metrics. In Proceedings of the Seventh Workshop on Building Educational Applications Using NLP, pages 157-162. Association for Computational Linguistics.
Vasile Rus, Sidney D'Mello, Xiangen Hu, and Arthur Graesser. 2013. Recent advances in conversational intelligent tutoring systems. AI Magazine, 34(3):42-54.
Vasile Rus, Nobal Niraula, and Rajendra Banjade. 2015. DeepTutor: An effective, online intelligent tutoring system that promotes deep learning. In Twenty-Ninth AAAI Conference on Artificial Intelligence.
Jana Z. Sukkarieh and Eleanor Bolge. 2010. Building a textual entailment suite for the evaluation of automatic content scoring technologies. In LREC. Citeseer.
Md Arafat Sultan, Steven Bethard, and Tamara Sumner. 2015. DLS@CU: Sentence similarity from word alignment and semantic vector composition. In Proceedings of the 9th International Workshop on Semantic Evaluation, pages 148-153.
220,445,385 | [] | BIT's system for the AutoSimTrans 2020
July 10, 2020
Minqin Li lmqminqinli@163.com
Beijing Institute of Technology
BeijingChina
Haodong Cheng
Beijing Institute of Technology
BeijingChina
Yuanjie Wang
Beijing Institute of Technology
BeijingChina
Sijia Zhang
Beijing Institute of Technology
BeijingChina
Liting Wu
Beijing Institute of Technology
BeijingChina
Yuhang Guo guoyuhang@bit.edu.cn
Beijing Institute of Technology
BeijingChina
BIT's system for the AutoSimTrans 2020
Proceedings of the 1st Workshop on Automatic Simultaneous Translation
the 1st Workshop on Automatic Simultaneous Translation, July 10, 2020
This paper describes our machine translation systems for the streaming Chinese-to-English translation task of AutoSimTrans 2020. We present a sentence length based method and a sentence boundary detection model based method for the streaming input segmentation. Experimental results of the transcription and the ASR output translation on the development data sets show that the translation system with the detection model based method outperforms the one with the length based method in BLEU score by 1.19 and 0.99 respectively under similar or better latency.
Introduction
Automatic simultaneous machine translation is a useful technique in many speech translation scenarios. Compared with traditional machine translation, simultaneous translation focuses on processing streaming inputs of spoken language and achieving low-latency translations. Two challenges have to be faced in this task. On one hand, few parallel corpora in the spoken language domain are openly available, which means that translation performance is not as good as in the general domain. On the other hand, traditional machine translation takes a full sentence as input, so the latency of the translation is relatively long.
To deal with the shortage of the spoken language corpora, we pre-train a machine translation model on general domain corpus and then fine-tune this model with limited spoken language corpora. We also augment the spoken language corpora with different strategies to increase the in-domain corpora.
In order to reduce the translation latency, we use three sentence segmentation methods: a punctuation based method, a length based method and a sentence boundary detection model based method. All of the methods split the input source sentence into short pieces, which allows the translation model to produce low-latency translations.
In the streaming automatic speech recognition (ASR) output track of the Chinese-to-English translation task of AutoSimTrans 2020, most of our proposed systems outperform the baseline systems in BLEU score, and the sentence boundary detection model based sentence segmentation method obtains a higher BLEU score than the length based method under similar latency.
Task Description
We participated in the streaming Chinese-to-English translation task of AutoSimTrans 2020 1: the streaming ASR output translation track and the streaming transcription translation track. The two tracks are similar except that the ASR output may contain recognition errors and includes no internal punctuation, only end punctuation. Table 1 shows an example of the streaming ASR output translation.
Approaches
All our systems can be divided into 3 parts: data preprocessing, sentence segmentation and translation. Data preprocessing includes data cleaning and data augmentation. We implement 3 sentence segmentation methods, which are based on punctuation, sentence length and a sentence boundary detection model, respectively. The training of the translation model includes pre-training out of domain and fine-tuning in domain.
Table 1 (streaming ASR output and its translation): the translation column reads "Hello everyone. Welcome everyone to come, here." (The Chinese ASR output column is not reproduced here.)
Data Cleaning
Noise in a large-scale parallel corpus is almost inevitable. We clean the parallel corpus for training. Here we mainly focus on misaligned sentence pairs in the training corpus. We find that in the CWMT19 zh-en data set, some of the target sentences are not in English but in Chinese, Japanese, French or some other noisy form. We suspect these small noises may affect the training of the model. Inspired by Bérard et al. (2019), we apply a language detection script, langid.py 2, to the source and the target sentences of the CWMT19 data set separately. Sentence pairs which do not match their expected languages are deleted. The corpus is then cleaned by the tensor2tensor 3 module by default. Eventually the CWMT19 corpus is filtered from 9,023,708 pairs down to 7,227,510 pairs after data cleaning.
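A minimal sketch of this language-based filtering is shown below, assuming the parallel corpus is given as (source, target) string pairs; langid.classify returns a (language_code, score) tuple. The exact filtering code used by the authors is not part of the paper.

import langid

def filter_parallel_corpus(pairs, src_lang="zh", tgt_lang="en"):
    # Keep only pairs whose detected languages match the expected ones.
    kept = []
    for src, tgt in pairs:
        if langid.classify(src)[0] == src_lang and langid.classify(tgt)[0] == tgt_lang:
            kept.append((src, tgt))
    return kept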
Data Augmentation
Insufficiency of training data is common in spoken language translation, and many data augmentation methods are used to alleviate this problem. In the streaming ASR output translation system, we use the homophone substitution method to augment the training data according to the characteristics of ASR output translation. The results of ASR usually contain homophone substitution errors. We randomly replace each character in the source language part of the training corpus, with probability p, with one of its homophones to improve the generalization ability of the system. As shown in Table 2, we find characters that are homophonic with the selected character, sample them according to the probability that these characters appear in a corpus, and substitute them into the corresponding positions. The data augmentation is only used in our MT model's training because of the insufficiency of training data in the spoken language domain.
Similarly, we randomly substitute words in the source language sentences using homophone substitution. The result of this substitution is closer to real speech recognition output, as shown in Table 3. We first split the source language sentence into a word sequence, decide with probability p whether to replace each word with one of its homophones, and then sample the replacement according to the distribution of homophones in a corpus. Finally we substitute it into the corresponding position.
In this system, we adopt the character and the word frequency distribution in an ASR corpus, the AISHELL-2 corpus (Du et al., 2018), and set the substitution probability p = 0.3.
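The following is a minimal sketch of this augmentation step; homophones (mapping a character or word to candidate homophones) and freqs (their relative frequencies in an ASR corpus such as AISHELL-2) are assumed to be precomputed and are not defined in the paper. Each token is replaced with probability p = 0.3 by a homophone sampled according to those frequencies.

import random

def homophone_augment(tokens, homophones, freqs, p=0.3):
    out = []
    for tok in tokens:
        cands = homophones.get(tok, [])
        if cands and random.random() < p:
            # Sample a homophone according to its corpus frequency.
            weights = [freqs.get(c, 1) for c in cands]
            out.append(random.choices(cands, weights=weights, k=1)[0])
        else:
            out.append(tok)
    return out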
Sentence Segmentation
Low latency is important to simultaneous machine translation. Our systems approach low latency translation by splitting long input word sequences into short ones. We use three sentence segmentation methods in this work, namely, punctuation based sentence segmentation (PSS), length based sentence segmentation (LSS), and sentence boundary detection model based sentence segmentation (MSS).
PSS
In the punctuation based sentence segmentation method we put the streaming input tokens into a buffer one by one. When the input token is a punctuation, the word sequence in the buffer is translated. Then the buffer is cleared and we put the next tokens into it. The above procedure repeats until the end of the streaming inputs.
LSS In our length based sentence segmentation method we put the streaming input tokens into a buffer one by one. When the input token is a punctuation mark or the sequence length in the buffer reaches a threshold L, the word sequence in the buffer except the last word is translated, in case the last word is an incomplete one. The translated part of the buffer is then cleared and we put the next tokens into the buffer. The above procedure repeats until the end of the streaming inputs.
Table 2: A randomly selected single character (in red bold font) is substituted by its homophonic character. The corresponding pinyin is included in the bracket. (The Chinese example rows are not reproduced here; the English glosses read "This society society hasn't trust it doesn't work" for the original and "This suppose society hasn't newcomers it doesn't work" for the substitution.)
Table 3: A randomly selected word (in red bold font) is substituted by its homophonic word. The corresponding pinyin is included in the bracket. (The Chinese example rows are not reproduced here; the English glosses read "This society hasn't trust it doesn't work" for the original and "This society hasn't newcomers it doesn't work" for the substitution.)
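A minimal sketch of the PSS and LSS buffering described above is given below; translate stands for the underlying MT system and is assumed to exist, and the punctuation set and threshold value are illustrative.

PUNCT = set("，。！？,.!?")

def pss(stream, translate):
    # Punctuation-based segmentation: flush the buffer at every punctuation mark.
    buf = []
    for tok in stream:
        buf.append(tok)
        if tok in PUNCT:
            yield translate(buf)
            buf = []
    if buf:
        yield translate(buf)

def lss(stream, translate, max_len=15):
    # Length-based segmentation: also flush when the buffer reaches max_len,
    # keeping the last token since it may be an incomplete word.
    buf = []
    for tok in stream:
        buf.append(tok)
        if tok in PUNCT:
            yield translate(buf)
            buf = []
        elif len(buf) >= max_len:
            yield translate(buf[:-1])
            buf = buf[-1:]
    if buf:
        yield translate(buf)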
Table 4 (training examples for the boundary detection model): "So we think that free" is labeled 0 (incomplete) and "So we think that free is only temporary" is labeled 1 (complete).
MSS Apparently many translation inputs with the LSS are incomplete sentence fragments because of the hard sentence segmentation. Here we propose a sentence boundary detection model for the sentence segmentation. We build this model on top of a pre-trained model, BERT (Devlin et al., 2018). Our model is built by adding two layers of fully connected network to the Chinese BERT pre-trained model. The training data set is constructed using all transcription pairs provided by the organizer. For the sentences in the transcriptions, we use a punctuation set, {, . ! ? }, as the sentence boundary indicators to obtain complete sentences, which are used as positive samples. We then sample incomplete fragments from the above sentences uniformly to obtain negative samples. The ratio of positive samples to negative samples is 1 : 4. We apply the sentence boundary detection model to streaming ASR output translation. The model returns a prediction for each streaming sequence as a judgment of whether it is to be translated. However, we should not set the segmentation point at the first position of the detection. Suppose a detected sentence boundary position is i and the next detected boundary position is i + 1. This means both of the prefix word sequences w_1:i and w_1:i+1 can be seen as a complete sentence. Usually the boundary position i + 1 is better than i. Generally we set a rule that position i is a sentence boundary if the sentence boundary detection model returns true for position i and false for i + 1. In this way, the word sequence (i.e. w_1:i) is fed to the translation system when it is detected and the untranslated part (i.e. w_i+1) will be translated in the next sentence. For example, a position i of the streaming inputs in Table 5 is finally taken as a boundary only when position i is detected as a boundary by the model while the next position i + 1 is not.
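A minimal sketch of this MSS rule is shown below; is_complete stands for the BERT-based boundary classifier described above and translate for the MT system, both of which are assumed to exist rather than reproduced from the authors' code.

def mss(tokens, is_complete, translate):
    # Segment at position i only if the prefix ending at i is judged complete
    # while the prefix ending at i + 1 is not.
    outputs, start = [], 0
    for i in range(len(tokens) - 1):
        prefix = tokens[start:i + 1]
        longer = tokens[start:i + 2]
        if is_complete(prefix) and not is_complete(longer):
            outputs.append(translate(prefix))
            start = i + 1
    if start < len(tokens):
        # The remaining tokens are translated at the end of the stream.
        outputs.append(translate(tokens[start:]))
    return outputs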
Pre-training and Fine-tuning
Pre-training and fine-tuning are the most popular training methods in the field of deep learning. It has been proved that this training mode is very effective in improving the performance of the model and is very simple to implement. Therefore, we use the CWMT19 data set to pre-train a base model, and then use the speech translation data provided by the organizer to fine-tune the model. We first train a basic Transformer translation model with the CWMT19 data set. In order to adapt to the spoken language domain, we directly fine-tune the pre-trained model on the transcriptions or ASR outputs provided by the organizer and our augmented data.
Table 5 (boundary detection on a streaming input; Position / Return of model / Boundary): i − 2 / False / 0; i − 1 / True / 0; i / True / 1; i + 1 / False / 0. (The sentence prefixes are not reproduced here.)
Experiments
Data Sets
We train our model with the CWMT19 zh-en data set, the streaming transcription and the streaming ASR output data sets provided by the evaluation organizer. Because of the evaluation track restrictions, we did not use the UN parallel corpus and the News Commentary corpus although they were used in the baseline. The CWMT19 zh-en data set includes six sub data sets: the casia2015 corpus, the casict2011 corpus, the casict2015 corpus, the datum2015 corpus, the datum2017 corpus and the neu2017 corpus. The CWMT19 data set contains 9,023,708 parallel sentences in total. They are used in the pre-training of our model. The streaming transcription and streaming ASR output data sets are provided by the evaluation organizer. The transcription data set contains 37,901 pairs and the ASR output data set contains 202,237 pairs. We use them as the fine-tuning data to adapt to the spoken language domain. Finally we evaluate our system on the development set which contains 956 pairs. The sizes of the data sets are listed in Table 6.
System Settings
Our model is based on the transformer in tensor2tensor.
We set the hyperparameters of the model to transformer_big and the problem parameter to translate_enzh_wmt32k_rev. We train the model on 6 RTX-Titan GPUs for 9 days. Then we use the transcription data and the ASR output data to fine-tune the model, respectively, on 2 GPUs. We fine-tune the model until it overfits.
Baseline Model
The baseline model 4 (Ma et al., 2018) provided by the evaluation organizer is trained on the WMT18 zh-en data set, including CWMT19, the UN parallel corpus, and the News Commentary corpus. The baseline model uses the transformer, which is essentially the same as the base model from the original paper (Vaswani et al., 2017). It applies a Prefix-to-Prefix architecture and a Wait-K strategy to the transformer. We test the Wait-1, Wait-3 and FULL models with fine-tuning on domain data as the comparison to our system. For the Wait-1 and Wait-3 settings, the baseline fine-tunes for 30,000 steps. For the FULL setting, the baseline fine-tunes for 40,000 steps.
Latency Metric: Average Lagging
Ma et al. (2018) use Average Lagging (AL) as the latency metric. They define:
AL_g(x, y) = \frac{1}{\tau_g(|x|)} \sum_{t=1}^{\tau_g(|x|)} \left( g(t) - \frac{t-1}{r} \right)    (1)
where τ_g(|x|) denotes the cut-off step, i.e. the decoding step at which the source sentence finishes, g(t) denotes the number of source words processed by the encoder when deciding the target word y_t, and r = |y|/|x| is the target-to-source length ratio. The lower the AL value, the lower the delay and the better the real-time performance of the system.
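For a fixed policy, equation (1) can be computed directly; the sketch below is an illustrative reading of the metric, where g gives the number of source tokens read before emitting each target token.

def average_lagging(g, src_len, tgt_len):
    r = tgt_len / src_len
    # Cut-off step: first decoding step at which the whole source has been read.
    tau = next((t for t in range(1, tgt_len + 1) if g(t) >= src_len), tgt_len)
    return sum(g(t) - (t - 1) / r for t in range(1, tau + 1)) / tau

# Example: a wait-3 policy on a 6-token source and 6-token target gives AL = 3.0.
print(average_lagging(lambda t: min(t + 2, 6), src_len=6, tgt_len=6))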
Results
Streaming Transcription Translation
The results of our streaming transcription system on the development data set are shown in Table 7. FT-Trans indicates that the fine-tuning data set includes the original transcriptions and the transcriptions without punctuation (i.e. the depunctuated version). LSS-L indicates the system with the length based sentence segmentation method, where the length threshold is L. PSS indicates the system with our punctuation based sentence segmentation method. MSS indicates the system with our sentence boundary detection model based sentence segmentation method. Wait-1, Wait-3 and FULL indicate the different settings of the baseline systems. Among these settings, the best AL score is from the Wait-1 baseline and the best BLEU score is from our PSS system. Under a similar BLEU score, LSS-17 obtains a better AL score than the FULL baseline. Both the AL and the BLEU score of the LSS-L systems grow as L increases. The MSS system achieves a BLEU score 1.19 higher than the LSS-L system under a similar AL score (i.e. MSS vs. LSS-12). Finally we submitted the PSS system because of its high BLEU score and relatively low AL latency compared with the FULL baseline.
Streaming ASR Output Translation
The translation performance on the streaming ASR output is shown in Table 8. FT-ASR means the systems are fine-tuned on the combination of the ASR output and the ASR output without punctuation. FT-ASR+Aug means the fine-tuning set includes FT-ASR, the homophone substitution augmented transcriptions, and their depunctuated version. FT-ASR+Aug+Trans means the fine-tuning set contains FT-ASR+Aug plus the transcriptions and their depunctuated version. As shown in Table 8, all of our systems outperform the Wait-1 and Wait-3 settings of the baseline in BLEU score, and our MSS model outperforms the FULL baseline. As more data is added to the fine-tuning set, the performance of the systems increases accordingly. Both LSS-15 and PSS in FT-ASR+Aug outperform the corresponding systems in FT-ASR, which indicates the effectiveness of the data augmentation. The BLEU score of LSS-15 (FT-ASR+Aug+Trans) is 2.22 higher than that of LSS-15 (FT-ASR), while the AL latency of the former is also better than that of the latter.
In the FT-ASR+Aug+Trans, the sentence boundary detection model based sentence segmentation, MSS, obtains higher (i.e. +0.99) BLEU score and lower (i.e. -1.06) AL latency than the LSS-15. The BLEU score of MSS is lower than PSS by 1.46 but the latency is improved by 15.88.
Compared with the results of transcription translation of FT-Trans in Table 7, the BLEU scores of the ASR outputs translations relatively decreased. This indicates the effects of the cascade error of the ASR systems.
The latency of the LSS in Table 7 and Table 8 are close. The latency of PSS increased from 10 to around 22. This indicates the lack of punctuation in the ASR outputs.
The MSS system shows similar AL latency and a smaller BLEU score drop between transcription and ASR output translation. In the end we submitted the MSS system to this evaluation track.
Several examples of the translations produced by different systems can be seen in Appendix A.
Related Work
End-to-end machine translation models, such as the transformer (Vaswani et al., 2017), have greatly promoted the progress of machine translation research and have been applied to speech translation research (Schneider and Waibel, 2019; Srinivasan et al., 2019; Wetesko et al., 2019). Furthermore, several end-to-end approaches have recently been proposed for simultaneous translation (Zheng et al., 2019b,a). In order to solve the problem of insufficient parallel corpus data for simultaneous translation tasks, Schneider and Waibel (2019) augmented the available training data using back-translation. Vial et al. (2019) used a BERT pre-trained model trained on a large amount of external monolingual data to achieve data augmentation. Other work simulated the input noise of the ASR model and used placeholders, homophones and high-frequency words to replace the original parallel corpus at the character level. Inspired by this line of work, we augment the training data by randomly replacing the words in the source sentences with homophones.
In order to reduce the translation latency, Ma et al. (2018) used the Prefix-to-Prefix architecture, which predicts the target word from a source prefix rather than the whole sequence. Their Wait-K models are used as the baseline and are provided by the shared task organizers. The Wait-K models start to predict the target after the first K source words appear. Zheng et al. (2020) applied an ensemble of models trained with a set of Wait-K policies to achieve an adaptive policy. Xiong et al. (2019) proposed a pre-training based segmentation method which is similar to MSS. However, in the decoding stage, the time complexity of this method is O(n^2), whereas the time complexity of MSS is O(n).
Conclusions
In this paper, we describe our submission systems for the streaming Chinese-to-English translation task of AutoSimTrans 2020. In this system the translation model is trained on the CWMT19 data set with the transformer model. We leverage homophonic character and word substitutions to augment the fine-tuning speech transcription data set. We implement a punctuation based, a length based and a sentence boundary detection model based sentence segmentation method to improve the latency of the translation system. Experimental results on the development data sets show that the punctuation based sentence segmentation obtains the best BLEU score with a reasonable latency on the transcription translation track. The results on the ASR output translation show the effectiveness of our data augmentation approaches. The sentence boundary detection model based sentence segmentation gives low latency and a stable BLEU score across all our systems. However, because we did not have enough time to retrain the MT model, some settings of our system are not consistent with the baseline, so it is difficult to judge whether our method is better than the baseline's method. In the future, we will complete this comparative experiment.
A Appendices
We list several translation results to compare our systems with the baselines on the transcription translation track and the ASR output translation track. As shown in Table 9 and 10, missing translation can be observed in the Wait-K baselines and our system.
Table 9 (transcription track). Source: (the Chinese sentence is not reproduced here). Reference: He has always been ranked among the last, so to speak, the last in those games. What kind of spirit supported him to take part in the competition all the time?
Table 10 (system translations of the sentence in Table 9):
Wait-1: In his every after shock, he won the game, even in the No.1 games.
Wait-3: Every time when he does a match, he will lose, even in the No.1 draw, what is that?
FULL: In every game, which is not only about the win, but also about the power that comes to the 1st place, those who support him to go on training all the time.
FT-Trans(PSS): In every game he lost, in the second countdown, what is it? What was the strength that kept him going? I keep training.
For streaming ASR output, as shown in Table 12, missing translation can also be observed in the Wait-K baselines. From Table 13 we can see that in the segmentation of the LSS-15 most of the sentence fragments are incomplete. As shown in Table 14, the segmentation of the MSS is reasonable and the translation is much better than that of the LSS-15.
Table 12 (system translations of the sentence in Table 11):
Wait-1: So, is everyone wants to fail?
Right, everyone never want to fail, and they all want to win every game, even when they are in the second best.
FULL: That is, to say, every one would never want to win, in every game, or even in the second place, what was the power that supports him to go there and that number?
Table 13 (LSS-15 on FT-ASR+Aug+Trans; the Chinese segmentation is not reproduced). Translation: Yes, everyone wants to lose. The winner lost every game. What is second to last? What kind of strength supports him to go on? The game.
Table 14 (MSS on FT-ASR+Aug+Trans; the Chinese segmentation is not reproduced). Translation: Right? Everyone doesn't want to lose. They all want to win. In each game, it is losing or even losing. In the second place. What is power? It supports him to go all the way to the game.
Table 1: An example of streaming ASR output translations.
Table 4: Examples of the training data set of the model. 1: complete sentence. 0: incomplete sentence.
Table 4 illustrates a positive example and a negative example. The training set contains 370k examples, the test set 7k examples, and the validation set 7k examples. After running 3 epochs, the model converges with an accuracy of 92.5% on the test set.
Table 5: Examples of using the model to detect boundaries. 0: not a boundary of a sentence, 1: boundary of a sentence.
Table 6: The size of the different data sets.
Table 7: The translation results on the development data set of streaming transcriptions.
Table 8: The translation results on the development data set of streaming ASR outputs.
Baigong Zheng, Renjie Zheng, Mingbo Ma, and Liang Huang. 2019b. Simultaneous translation with flexible policy via restricted imitation learning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5816-5822, Florence, Italy. Association for Computational Linguistics.
Table 9: An example of a source sentence and its reference translation in the transcription translation track.
Table 10: The translations of the sentence in Table 9.
Table 11: An example of the source sentence and the reference translation in the ASR output translation track. Reference: Right? Everyone does not want to lose; rather, they all want to win. When he lost every match or even came in the second last or last place, what was it or what kind of strength supported him to compete and train all the time? (The Chinese source is not reproduced here.)
Table 12: The translations of the sentence in Table 11.
Table 13: The sentence segmentation and the corresponding translations for the sentence in Table 11 with the setting of LSS-15 on FT-ASR+Aug+Trans.
Table 14: The sentence segmentation and the corresponding translations for the sentence in Table 11 with the setting of MSS on FT-ASR+Aug+Trans.
1 https://autosimtrans.github.io/shared
2 https://github.com/saffsd/langid.py
3 https://github.com/tensorflow/tensor2tensor
4 https://github.com/autosimtrans/SimulTransBaseline
Acknowledgments
Alexandre Bérard, Ioan Calapodescu, and Claude Roux. 2019. Naver Labs Europe's systems for the WMT19 machine translation robustness task. arXiv preprint arXiv:1907.06488.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. CoRR, abs/1810.04805.
Jiayu Du, Xingyu Na, Xuechen Liu, and Hui Bu. 2018. AISHELL-2: Transforming Mandarin ASR research into industrial scale. CoRR, abs/1808.10583.
Xiang Li, Haiyang Xue, Wei Chen, Yang Liu, Yang Feng, and Qun Liu. 2018. Improving the robustness of speech translation. arXiv preprint arXiv:1811.00728.
Mingbo Ma, Liang Huang, Hao Xiong, Kaibo Liu, Chuanqiang Zhang, Zhongjun He, Hairong Liu, Xing Li, and Haifeng Wang. 2018. STACL: Simultaneous translation with integrated anticipation and controllable latency. CoRR, abs/1810.08398.
Felix Schneider and Alex Waibel. 2019. KIT's submission to the IWSLT 2019 shared task on text translation. In Proceedings of the 16th International Workshop on Spoken Language Translation.
Tejas Srinivasan, Ramon Sanabria, and Florian Metze. 2019. CMU's machine translation system for IWSLT 2019.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. CoRR, abs/1706.03762.
Loïc Vial, Benjamin Lecouteux, Didier Schwab, Hang Le, and Laurent Besacier. 2019. The LIG system for the English-Czech text translation task of IWSLT 2019.
Joanna Wetesko, Marcin Chochowski, Pawel Przybysz, Philip Williams, Roman Grundkiewicz, Rico Sennrich, Barry Haddow, Antonio Valerio Miceli Barone, and Alexandra Birch. 2019. Samsung and University of Edinburgh's system for the IWSLT 2019.
Hao Xiong, Ruiqing Zhang, Chuanqiang Zhang, Zhongjun He, Hua Wu, and Haifeng Wang. 2019. DuTongChuan: Context-aware translation model for simultaneous interpreting. arXiv preprint arXiv:1907.12984.
Baigong Zheng, Kaibo Liu, Renjie Zheng, Mingbo Ma, Hairong Liu, and Liang Huang. 2020. Simultaneous translation policies: From fixed to adaptive.
Baigong Zheng, Renjie Zheng, Mingbo Ma, and Liang Huang. 2019a. Simpler and faster learning of adaptive policies for simultaneous translation. arXiv preprint arXiv:1909.01559.
||
28,253,803 | Kimnio Kettunen SITRA Foundation SF-00121 HELSINKI ON MODELLING DEPENDENCY-ORIENTED PARSING | [
9946363,
15605878
] | Kimnio Kettunen SITRA Foundation SF-00121 HELSINKI ON MODELLING DEPENDENCY-ORIENTED PARSING
P O Box
Kimnio Kettunen SITRA Foundation SF-00121 HELSINKI ON MODELLING DEPENDENCY-ORIENTED PARSING
This paper describes some basic linguistical characteristics of the parser, DADA, that has been implemented as a part of a database interface system for written Finnish queries * The parser is general and it is by now capable of analyzing a nontrivial subset of Finnish clauses. The basic idea of the parser is to provide analyzed sentences with syntactico-semantic structure. The structure that is given to an input clause is a functional case-labeled dependency structure. Dependency is stated and interpreted in functional labels which are then further interpreted using semantic roles. Therefore a superficial semantic representation is given to the analyzed sentence.
The following set lists salient features of our parser; -113-On modelling dependency-oriented parsing Kimmo Kettunen Proceedings of NODALIDA 1985, pages 113-120 1 ) strength in grasping word-order variations of an inflectional language. This is due to the dependency grammar and to the implementation that employs two-way automata (cf. Levelt I974).
2) Multidimensional analysis: full advantage of the rich inflectional morphology of Finnish is obtained. Rules of grammar are stated so that morphological knowledge as well as knowledge from higher strata may appear in them.
3) Parallel syntactic and case-semantic analysis or only syntactic analysis may be obtained, as one wishes.
4) Semi-strict separation of linguistic knowledge and parsing mechanism. This is due to the high-level grammar description language, DPL, in which the grammar is written. The grammar and the parser in their present version have some 30 pages of DPL-description. That makes about 5500 lines of compiled elementary LISP-code.
5) The parser is strongly data-driven. Parsing proceeds bottom-up and is lexicon-based. Structures are built from words to larger constituents (cf. Winograd 1983).
6) All the time the parser has only a few rules that must be checked. The only hypotheses made are those which come with the expectations of the current stack. When rules are activated like this, the size of the grammar will not affect the efficiency of the parser.
A sketch of the parsing process
Firstly we shall briefly sketch the overall parsing process.
A morphological analysis precedes the parser. We have a morphological processor that analyzes the inflected words and gives the basic word forms and inflectional categories (cf. Jappinen et al. 1983). This is, of course, a prerequisite for parsing a highly inflected language, such as Finnish. The parser gets morphologically analyzed words with their lexical information. Lexical descriptions come from the parser's own lexicon.
A disambiguator for ambiguous morphological output should also exist somewhere. One place for it could be after morphological analysis (cf. Karlsson 1985b) or the disambiguation could be done during the parse. So far we have none. For each morphologically ambiguous token of a word form the parser tries to find dependents.
This leads to duplicated work and possible misparses.
The DADA parser proceeds word-by-word in the input clause. The basic method of the parser is that it tries to recognize possible dependents for each token of a word category moving out from the nearest neighbour. As the parser is modelled with two-way automata, it can recognize subordinates both from the left and right context of the current input word. For each word category possible dependents are described in automaton networks.
We may generalize that during the parse local dependencies are sought first: each word category gathers dependents of its own type. When each non-verbal category has finished gathering its dependents and it is labeled as +Phrase (i.e. it governs > 0 dependents), it may be matched to the global structure of the clause. In non-elliptic sentences (sentences containing a finite verb) the parsing is always finished when some global dependent (dependent of the main verb) is attached and the working stack is empty. Here ?X and X? refer to left and right-hand states, respectively. START is the initial state that sends the analysis to the proper automaton network. Rn's are dependency relations.
Schematically, for a word of category Cati with dependency relations Rn: START -> ?X(WCati) -> X?(WCati) -> XFin(WCati) -> a new input word is read.
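To make the control flow concrete, the following is a minimal Python sketch of the word-by-word, two-way dependent-gathering loop sketched above. The Word structure, the match() predicate and the exact scanning order are illustrative assumptions, not the actual DPL/LISP implementation.

class Word:
    def __init__(self, form, cat):
        self.form, self.cat = form, cat
        self.head, self.dependents = None, []
        self.is_phrase = False

def gather_dependents(words, match):
    # match(regent, dependent) stands in for the dependency relation rules
    for i, regent in enumerate(words):
        # ?X states: look for dependents in the left context, nearest neighbour first
        for dep in reversed(words[:i]):
            if dep.head is None and match(regent, dep):
                dep.head = regent
                regent.dependents.append(dep)
        # X? states: look for dependents in the right context, nearest neighbour first
        for dep in words[i + 1:]:
            if dep.head is None and match(regent, dep):
                dep.head = regent
                regent.dependents.append(dep)
        # XFin: the word is labeled +Phrase if it governs at least one dependent
        regent.is_phrase = bool(regent.dependents)
    return words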
Dependency is stated in two steps: for each word-class a set of possible dependents is determined by function names in states. Possible orderings of dependent and regent are stated in left and right-side automaton descriptions.
Dependency relations are the only rules that are used. A dependency relation concerns a pair of words (C, D) which have to fulfil the requirements stated in the rule. Rules are written as conditions or constraints that have to hold between C and D. Schematically, rules are stated in terms of the following kinds of knowledge:
MorphC = inflectional knowledge
SyntCat = syntactic category
SyntFeat = a bundle of syntactic features
SemCat = name of the semantic class
SemFeat = a bundle of distinctive semantic features
ConstFeat = knowledge that is figured out during the parse and is stated in binary features
FrameCat = frame category of the verb
FrameFeat = frame features of the verb
C = regent candidate
D = dependent candidate
Features and categories can be written in disjunctive and conjunctive statements to form rules of grammar. When a match between C and D succeeds, D is attached to C and labeled with the proper syntactic function label and (possibly) with a semantic case role label.
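For illustration only, a dependency-relation rule of the kind described above can be thought of as a constraint check between a regent candidate C and a dependent candidate D. The concrete feature values, labels and the example rule below are invented, not taken from the DPL grammar.

# One invented rule: a noun in the genitive may depend on a noun as its attribute.
GENITIVE_ATTRIBUTE = {
    "C": {"SyntCat": {"N"}},                     # constraints on the regent candidate
    "D": {"SyntCat": {"N"}, "MorphC": {"GEN"}},  # constraints on the dependent candidate
    "label": ("attribute", "possessor"),         # syntactic function, semantic case role
}

def satisfies(candidate, constraints):
    # every feature mentioned in the rule must have an allowed value on the candidate
    return all(candidate.get(feat) in allowed for feat, allowed in constraints.items())

def match(rule, c, d):
    # return the (function, role) labels if the pair (C, D) fulfils the rule
    if satisfies(c, rule["C"]) and satisfies(d, rule["D"]):
        return rule["label"]
    return None

c = {"SyntCat": "N", "MorphC": "NOM", "lemma": "ovi"}   # regent candidate
d = {"SyntCat": "N", "MorphC": "GEN", "lemma": "talo"}  # dependent candidate in the genitive
print(match(GENITIVE_ATTRIBUTE, c, d))                  # -> ('attribute', 'possessor')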
Our kind of description is rather far from the Haysian classical dependency notation. Whereas Hays describes order and restrictions in the same formula (Hays 1964), we have different descriptions for each. Especially the word order restrictions are currently described rather clumsily. Possible word orders are blurred into paths of the automaton network. As a linguistic description this is not satisfying and a new way for describing the word order should be found. The second major problem is that lexical knowledge is stated in two places. At present automaton descriptions work as a kind of valency lexicons which give the possible dependents of each word-category. But it is obvious that this information should be given separately in the lexicon.
Assigned structures
The parser builds labeled dependency trees which have the following characteristics:
- the linear order of the surface clause is preserved in the trees
- heads and their modifiers are attached directly without any non-terminal symbols
- dependency trees have labels of two kinds: syntactic function labels and case role labels. Syntactic functions are the main objects that are created by the dependency relations. Case role labels are further interpretations of these functional relations.
Dependency
As we know, two elements of a sentence are directly related in a dependency characterization if one DEPENDS on (conversely, is GOVERNED by) the other. The relationship is transitive, irreflexive and antisymmetric. A dependency relation is said to hold between two elements in certain circumstances. One member of this relation is called the GOVERNOR (or head or regent), the other the DEPENDENT (or modifier) (cf. e.g. Hudson 1980a, Mel'cuk 1979, Kunze 1975). Two intuitive ideas determine the existence of a dependency relation between the governor and the dependent:
i) The governor expects to find certain types of dependents in its neighbourhood. This expectation is inherent to the element that constitutes the governor, and may be a piece of dictionary knowledge.
ii) The governor is felt to be modified by the dependent (not vice versa). Johnson & King & des Tombe (1985) have discussed some basic problems concerning the dependency construction. It is useful to briefly state their points and consider our formalism in those respects.
The basic representational principles may be stated as follows (cf. also Robinson 1970): i) There is a one-to-one correspondence between leaves of the tree and primitive elements of the construction represented by the tree.
ii) There is one-to-one correspondence between nonterminal nodes of the tree and constructions of the text.
iii) Labellings on the nodes express the categories of primitive elements, constructions, and dependency relations.
As Johnson & King & des Tombe point out, this is elegant but empirically wrong and must be augmented. There are two classes of problems for the basic representational theory. Firstly, it implies that no unit can be a member of more than one construction.
Secondly, it implies that every unit must be a member of at least one construction (except the text itself). But this is not the case. In a sentence like "John tried to swim", "John" is a member of two constructions (Hudson calls this modifier-sharing), and this cannot be represented in tree form. Accordingly, in the sentence "Tom went to Paris and Hanna to London", "went" is a governor for two constructions (head-sharing). The Eurotra formalism has introduced a special notion of EMPTY ELEMENTS to handle these phenomena. These are shadow elements of their antecedents, i.e. the elements that participate in more than one construction. The empty elements in trees are leaves that do not correspond to anything at all in the text (the one-to-one correspondence is no longer valid).
There are some further problems. In some constructions there exist elements that are not dependent on anything in the clause.
In "Frankly, I do not care a bit" "frankly" does not seem to be dependent on any word in the clause. For these situations a notion of TRANSCONSTRUCTIONALS is introduced in the Eurotra formalism. These are handled in a way that makes them as if they were dependents in the construction they are related to intuitively. A special label, pseudodependency, is attached to them.
Such problems of course also exist in Finnish. The problem of modifier-sharing, especially, is common already in rather simple clauses. Different kinds of infinitive constructions are typical examples of the phenomenon. Clauses such as "Poika haluaa analysoida kivia" ("The boy wants to analyze stones") cannot be properly handled by our parser at present. Some new methods for handling these phenomena should be added either to the parser or at least to a post-parser analysis of sentences. At present the constructions of the parser are based only on the naively elegant one-to-one correspondence principle.
Fig. 1: a simplified description of the flow of parsing
Fraser, Norman 1985. A Word Grammar Parser. M.Sc. thesis, Department of Computer Science, University College, London.
Hays, David G. 1964. Dependency Theory: a Formalism and Some Observations. Language 40: 511-525.
Hays, David G. 1966. Parsing. In Hays, D.G. (ed.), Readings in Automatic Language Processing. American Elsevier Publishing Company, New York: 73-82.
Hellwig, Peter 1985. Program System PLAIN: Programs for Language Analysis and Inference. Examples of Application. Computing Unit and Linguistics and International Studies Department, University of Surrey, Guildford.
Heringer, Hans Jürgen & Strecker, Bruno & Wimmer, Rainer 1980. Syntax. Fragen - Lösungen - Alternativen. Wilhelm Fink Verlag, München.
Hudson, Richard 1976. Arguments for a Non-transformational Grammar. The University of Chicago Press, Chicago.
Hudson, Richard 1980a. Constituency and Dependency. Linguistics 13: 179-198.
Hudson, Richard 1980b. Daughter-dependency Grammar. In Hans-Heinrich Lieb (ed.), Oberflächensyntax und Semantik. Linguistische Arbeiten 93. Max Niemeyer Verlag, Tübingen.
Hudson, Richard 1983. Word Grammar. In Hattori, Shiro & Inoue, Kazuko (eds.), Proceedings of the XIIIth International Congress of Linguists, Tokyo: 89-101.
Hudson, Richard 1984. Word Grammar. Basil Blackwell, Oxford.
Hudson, Richard 1985. A Prolog Implementation of Word Grammar. Mimeo, October 1985, University College, London.
Johnson, Rod & King, Maghi & des Tombe, Louis 1985. A Multilingual System under Development. Computational Linguistics 11: 155-169.
Jappinen, Harri & Lehtola, Aarno & Nelimarkka, Esa & Ylilammi, Matti 1983. Morphological Analysis of Finnish: a Heuristic Approach. Helsinki University of Technology, Digital Systems Laboratory, report B26.
Karlsson, Fred 1985a (ed.). Computational Morphosyntax. Report on Research 1981-1984. Publications of the Department of General Linguistics, No. 13, University of Helsinki.
Karlsson, Fred 1985b. Parsing Finnish in Terms of Process Grammar. In Karlsson 1985a (ed.): 137-176.
Kunze, Jürgen 1975. Abhängigkeitsgrammatik. Studia Grammatica XII. Akademie-Verlag, Berlin.
Kunze, Jürgen 1982a (Hrg.). Automatische Analyse des Deutschen. Akademie-Verlag, Berlin.
Kunze, Jürgen 1982b. Einführung. In Kunze 1982a: 17-34.
Lehtola, Aarno & Jappinen, Harri & Nelimarkka, Esa 1985. Language-based Environment for Natural Language Parsing. Proceedings of the Second Conference of the European Chapter of the Association for Computational Linguistics, USA: 98-106.
Levelt, W.J.M. 1974. Formal Grammars in Linguistics and Psycholinguistics. Volume II: Applications in Linguistic Theory. Mouton, The Hague.
Lyons, John 1977. Semantics 2. Cambridge University Press, Cambridge.
Matthews, P.M. 1981. Syntax. Cambridge University Press, Cambridge.
Mel'cuk, Igor 1979. Studies in Dependency Syntax. Linguistica Extranea, Studia 2. Karoma Publishers, Ann Arbor.
Miller, J. 1985. Semantics and Syntax: Parallels and Connections. Cambridge University Press, Cambridge.
Muraki, Kazunori & Ichiyama, Shunji & Fukumochi, Yasumoto 1985. Augmented Dependency Grammar: a Simple Interface between the Grammar Rule and the Knowledge. Proceedings of the Second Conference of the European Chapter of the Association for Computational Linguistics, USA: 198-204.
Nelimarkka, Esa & Lehtola, Aarno & Jappinen, Harri 1984a. A Computational Model of Finnish Sentence Structure. In Anna Sågvall Hein (ed.), Föredrag vid De Nordiska Datalingvistikdagarna 1983, Uppsala: 169-177.
Nelimarkka, Esa & Lehtola, Aarno & Jappinen, Harri 1984b. Parsing an Inflectional Free Word Order Language with Two-way Finite Automata. In O'Shea, Tim (ed.), ECAI-84: Advances in Artificial Intelligence. Elsevier Science Publishers: 167-176.
Platek, M. & Sgall, J. 1984. A Dependency Base for Linguistic Description. In Sgall 1984 (ed.): 63-98.
Robinson, Jane J. 1970. Dependency Structures and Transformational Rules. Language 46: 259-285.
Sgall, Petr 1984 (ed.). Contributions to Functional Syntax, Semantics and Language Comprehension. Linguistic & Literary Studies in Eastern Europe vol. 16. John Benjamins, Amsterdam.
Somers, Harald 1984. On the Validity of the Complement-adjunct Distinction in Valency Grammar. Linguistics 22: 507-530.
Starosta, Stanley 1985. The End of Phrase Structure as We Know It. Series A, Paper no. 147. L.A.U.D.T., Linguistic Agency of Duisburg.
Urutyan, R.L. & Simonyan, S.L. 1983. Analysis of Equivalence in Language by Means of D-grammars. In Tiits, M. (ed.), Symposium on Grammars of Analysis and Synthesis and Their Representation in Computational Structures. Academy of Sciences of the Estonian S.S.R., Tallinn: 108-109.
Winograd, Terry 1983. Language as a Cognitive Process. Volume I: Syntax. Addison Wesley, Massachusetts. |
|
8,582,271 | An Integrated Approach to Heterogeneous Data for Information Extraction | The paper proposes an integrated framework for web personal information extraction, such as biographical information and occupation; those kinds of information are necessary to further construct a social network (a kind of semantic web) for a person. As web data is heterogeneous in nature, most IE systems, whether named entity recognition (NER) or relation detection and recognition (RDR) systems, fail to achieve reliably robust results. We propose a flexible framework which can effectively complement state-of-the-art statistical IE systems with rule-based IE systems for web data, and which achieves substantial improvement over other existing systems. In particular, in our current experiment, both the rule-based IE system, which is designed according to some web-specific expression patterns, and the statistical IE systems, which are developed for homogeneous corpora, are sensitive only to specific information types. Hence we argue that our system performance can be incrementally improved when new and effective IE systems are added into our framework. | [
13525347,
11664683,
29759924
] | An Integrated Approach to Heterogeneous Data for Information Extraction
Ying Chen chenying3176@gmail.com
Department of Chinese & Bilingual Studies
The Hong Kong Polytechnic University
Sophia Y M Lee sophiaym@gmail.com
Department of Chinese & Bilingual Studies
The Hong Kong Polytechnic University
Chu-Ren Huang churenhuang@gmail.com
Department of Chinese & Bilingual Studies
The Hong Kong Polytechnic University
An Integrated Approach to Heterogeneous Data for Information Extraction
relation extraction, information extraction
The paper proposes an integrated framework for web personal information extraction, such as biographical information and occupation; those kinds of information are necessary to further construct a social network (a kind of semantic web) for a person. As web data is heterogeneous in nature, most IE systems, whether named entity recognition (NER) or relation detection and recognition (RDR) systems, fail to achieve reliably robust results. We propose a flexible framework which can effectively complement state-of-the-art statistical IE systems with rule-based IE systems for web data, and which achieves substantial improvement over other existing systems. In particular, in our current experiment, both the rule-based IE system, which is designed according to some web-specific expression patterns, and the statistical IE systems, which are developed for homogeneous corpora, are sensitive only to specific information types. Hence we argue that our system performance can be incrementally improved when new and effective IE systems are added into our framework.
Introduction
The semantic web, which collects and formats different kinds of web knowledge, plays an important role in the development of a new generation of the web. One important component of the semantic web is to automatically extract different relations existing in web data. Information extraction (IE) provides a technology to solve this problem, particularly for a specific named entity. For example, the Web People Search 1 (WePS) 2009 evaluation (Sekine & Artiles, 2009) tries to extract personal information, and the TREC Entity Track 2 plans to find related information for a product.
Web IE is particularly challenging because web data is heterogeneous in nature. Complete or comprehensive IE information necessarily comes from many different sources with different formats. For example, "affiliation" and "email" are expressed so differently that they need different extraction approaches. Hence, a homogeneous IE model, whether statistical or rule-based, often cannot perform effectively for web IE. To overcome this problem, some previous systems (Culotta et al., 2004; Lan et al., 2009; Watanabe et al., 2009) have tried to combine different IE approaches, which are often homogeneous IE systems, to extract different types of information from web data. Nevertheless, few of them have explored how to effectively utilize or integrate different IE tools for web data.
In this paper, we propose a framework to integrate heterogeneous IE approaches for web IE. In this framework, we first segment web data according to its expression format. Similar to the genre categories - "formal text" and "informal text" - defined in Minkov et al. (2005), a text is either in "formal style" or "informal style." A "formal style" text obeys prescribed writing standards, i.e. complete sentences usually with a subject and an object. On the contrary, "informal style" has few limitations on writing format and can mix various representation levels. To capture this, we develop a novel algorithm to segment a webpage into fragments according to their expression format: formal style and informal style. This segmentation allows an existing IE system, which was often developed for a specific type of text, to be applied to the text fragments it suits. For example, most statistical IE systems are developed for news corpora; therefore, it is better to apply them to formal-style fragments.
In addition, web data has its own ways of conveying certain information. For example, it is common that occupation and affiliation information is expressed in a homepage in the format of "Name, Position, Affiliation," such as "Anita Coleman, Assistant Professor, University of Arizona." As this kind of web-specific expression often spans multiple lines, some existing IE patterns (Mann & Yarowsky, 2003; Mann, 2006; Rosenfeld & Feldman, 2006), which are limited to one sentence or were designed for formal style text, cannot be directly applied. To identify this web expression property, we develop web-specific patterns which take into account different kinds of information, such as webpage type information (e.g. homepage and biographical webpage) and text expression style (formal style and informal style). The experiment shows that those patterns achieve high precision, which is very important for real applications.
Instead of presenting a totally new IE solution for web data, this paper aims at providing a flexible framework that is able to effectively reuse and integrate different existing well-developed IE technologies for web data, and that collects some web-specific information to further help IE. We test our IE framework on the small-scale dataset provided by the WePS 2009 evaluation 3, one of whose tasks is to extract personal information for a given person, and the experiment shows promising results. Moreover, the comparatively high precision of our system indicates the strong capability of our framework to integrate IE technologies. Finally, based on the personal information distribution in web data, we discuss the practical problems of IE systems for further improvement. In this paper, we concentrate only on IE for a focus person. The terms "attribute" and "relation" are used interchangeably here.
Related work
Although IE is an old topic, it still poses a big challenge, especially for web data. In general, IE contains two key components: named-entity recognition (NER) and relation detection and recognition (RDR). In the personal IE case, NER is a basic component that extracts possible attribute value candidates, while RDR detects relations between the focus person and the given attribute value candidates and further selects valid attribute values for that person. To effectively tackle IE, both NER and RDR are required to perform well.
For NER, the naïve approach is rule-based, but its big disadvantage is the difficulty of rule design (Feldman, 2002) or rule learning (Etzioni et al., 2005), which needs to handle various named-entity expressions. In recent years, statistical NER technology (Bikel et al., 1999; McCallum & Li, 2003) has been significantly improved through a series of evaluations, such as Automatic Content Extraction (ACE), the Message Understanding Conference (MUC), and so on. However, because those NER technologies mainly focused on news documents, they are so dependent on textual cues, such as capitalization and corpus type, that their performance drops considerably when working on web data, where those cues are rather noisy.
Besides, the adaptation of these NER systems to other kinds of corpora is not easy (Vilain et al., 2007). Although some NER systems, e.g. Minkov et al. (2005), attempted to do the adaptation from a news corpus to a non-news corpus, they still focused on a homogeneous corpus. Web data, however, is heterogeneous in nature, and it is impossible to know the source information of the documents. Therefore, it is very difficult to do NER adaptation for web data. In this paper, we explore a different problem: how to effectively re-use existing well-developed NER systems for web data.
Compared to NER, RDR is still a comparatively hot and difficult topic. Although some statistical RDR systems have been developed, such as the systems participating in ACE, they were usually designed only for homogeneous data, like most existing statistical NER systems. Therefore, most previous web text mining systems adopted a rule-based approach to extract information (Rosenfeld et al., 2004; Soderland, 1999). Similar to rule-based NER, the main problem of rule-based RDR is the difficulty of designing rules for all kinds of text. In recent years, some studies have tried to learn rules by a semi-supervised or totally unsupervised approach (Mann & Yarowsky, 2003; Mann, 2006; Rosenfeld & Feldman, 2006). These approaches only detect relations existing within a sentence, which is not enough for web data, as some relations occur across sentences. Therefore, some patterns specific to web data need to be learned. Overall, RDR is still at an exploratory stage.
Most previous work has put much effort into developing statistical IE systems focusing mainly on homogeneous corpora, in particular news corpora, and their adaptation to web data is not an easy task. In this paper, we adopt another approach to web IE: how to effectively integrate those existing IE systems for web data. Meanwhile, we also explore some web-specific expression patterns for web personal information.
Methodology
Our IE framework consists of two main components: preprocessing (webpage type detection and fragment segmentation) and personal information extraction. Preprocessing is very important in our framework as it allows our system to integrate different IE technologies for personal information extraction.
Preprocessing
Given a webpage, it is first categorized into three webpage types according to its relationship to the focus person, namely homepage, related webpage (a webpage that mainly describes the focus person, such as a biographical webpage), and others. It is then segmented into several fragments based on its text expression styles, which can be formal style or informal style. Figure 1 gives an example of two fragments expressing similar information. The formal style fragment gives information in a complete sentence in a conservative manner. The informal style fragment gives only keywords, usually with each piece of information on a separate line. Keywords are often capitalized.
Webpage type detection
The type of a webpage can sometimes provide important document-level information for IE. For example, all occurrences of "I" in a homepage refer to the focus person, and therefore all information in those sentences is about the focus person. It is not easy to completely capture the type information of a webpage because a webpage creator may put this information in various places. In this study, we apply some naïve rules only to a webpage's title and its URL to detect its webpage type. The details of the rules are presented in Figure 2.
Text fragment segmentation
It is common that a webpage is written in a mixture of different representations: formal style and informal style. For example, in a resume, the description of the "objective" is often in formal style, while the "education experience" section is more likely to be in informal style. This noisy structure of webpages brings a lot of trouble to IE processing, no matter whether rule-based or machine-learning (ML)-based approaches are used.
As mentioned, most of the current ML-based IE systems were trained on corpora whose expression format is similar to formal style, and cannot be effectively applied to informal style text. To reuse these well-developed ML-based IE systems, we first segment a webpage into fragments, and then apply an ML-based IE system only to the formal style fragments. Another advantage of our fragment segmentation is that a fragment is a comparatively small unit, so it becomes much easier to design rules focusing on just one fragment.
There are two steps in our fragment segmentation. First, each line in a webpage is classified as one of two classes - formal style or informal style - according to the percentage of tokens that begin with a capital letter, as informal style text is assumed to consist mainly of capitalized words. Second, continuous lines that share the same expression style are merged into a single fragment. For instance, consider a 10-line webpage classified as follows:
Line 1: *********** formal
Line 2: *********** formal
Line 3: *********** formal
Line 4: *********** informal
Line 5: *********** formal
Line 6: *********** informal
Line 7: *********** informal
Line 8: *********** formal
Line 9: *********** informal
Line 10: ********** informal
Each line is classified as either formal or informal style. Lines 1, 2 and 3 are linked as one fragment, which is followed by the other five fragments, i.e. Line 4, Line 5, Lines 6-7, Line 8, and Lines 9-10. There are six fragments in total.
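The following is a minimal Python sketch of this two-step segmentation. The 0.5 capitalisation threshold is an assumption for illustration; the paper only states that lines are classified by the percentage of capitalised tokens.

from itertools import groupby

def line_style(line, threshold=0.5):
    tokens = line.split()
    if not tokens:
        return "informal"
    capitalised = sum(1 for t in tokens if t[0].isupper())
    # informal style text is assumed to consist mainly of capitalised words
    return "informal" if capitalised / len(tokens) >= threshold else "formal"

def segment(webpage_lines):
    # merge continuous lines sharing the same expression style into one fragment
    return [(style, [line for _, line in group])
            for style, group in groupby(
                ((line_style(l), l) for l in webpage_lines), key=lambda x: x[0])]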
Personal information extraction
As explained, the final IE result is determined by the combined performance of both NER and RDR in the IE system. In this paper, we mainly focus on how to effectively combine and reuse different NER and RDR technologies for web data. Currently, we explore the two main categories of IE technologies: rule-based and ML-based. Depending on how the NER and RDR systems are combined, we have three types of IE systems: a pure rule-based IE, a pure ML-based IE, and a hybrid IE (consisting of an ML-based NER and a rule-based RDR). For rule-based IE, we develop rule-based NER and RDR systems especially for web data, and for ML-based IE, we adopt some existing ML-based NER and RDR systems, which were developed for news corpora, to handle web data. Our rule-based NER and RDR systems differ from previous rule-based IE in the limited scope of the rules and in the consideration of web-specific information. All rules are limited to a single fragment, which saves much of the effort of finding a rule that works effectively over a whole document. Meanwhile, we also take some web-specific information into account when designing rules. In the following sections, we first briefly describe each NER and RDR system involved in our IE systems, and then give the three types of personal IE systems.
Rule-based NER:
Since informal style text is often noisy, almost no existing ML-based NER system is suitable for this kind of text. Here, we develop a rule-based NER system to extract the 16 kinds of attributes (listed in Table 1) used in the WePS 2009 task. The rules are all tailor-made for formal and informal style text, and each attribute has different rules. First, a keyword set is collected for each attribute in question. For example, the "occupation" keyword set is taken from the "Dictionary of Occupational Titles" (DOT), and the "organization" keyword set from General Architecture for Text Engineering (GATE) 4. Then, patterns using these keyword sets are applied to extract the named entity expressions in question.
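As an illustration of this keyword-driven extraction, the sketch below handles a single attribute; the keyword list and the pattern are invented examples, whereas the actual system uses attribute-specific keyword sets (e.g. DOT for occupations, GATE gazetteers for organisations) and different rules for formal and informal fragments.

import re

OCCUPATION_KEYWORDS = {"professor", "engineer", "librarian", "lecturer"}

def extract_occupations(fragment_lines):
    candidates = []
    for line in fragment_lines:
        for kw in OCCUPATION_KEYWORDS:
            # allow one optional capitalised pre-modifier, e.g. "Assistant Professor"
            pattern = r"(?:[A-Z][\w-]*\s+)?" + kw
            for m in re.finditer(pattern, line, re.IGNORECASE):
                candidates.append(m.group(0))
    return candidates

print(extract_occupations(["Anita Coleman", "Assistant Professor",
                           "University of Arizona"]))
# -> ['Assistant Professor']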
ML-based NER -BBN IdentiFinder:
Compared to informal-style fragments, formal-style fragments are well-written, and thus many well-developed ML-based NER systems can be directly applied to formal fragments with fairly good performance. In our experiment, we choose BBN IdentiFinder, which has a nice user interface and reasonable performance.
Rule-based RDR:
Given the named-entity expressions detected by a NER system, whether it is rule-based or ML-based, their relationships with the focus person need to be detected by an RDR system. There are two kinds of patterns in our rule-based RDR system: keyword-based patterns and web-specific patterns.
A keyword pattern is similar to previous patterns for text mining which identify relations within a sentence. First, a keyword set is collected for each target attribute. For example, the keyword "born" is chosen for the attributes of "date of birth" and "birth place." Then, a relation between a named entity expression and a focus person exists only when the following two requirements are satisfied.
1) The named entity expression belongs to the required types, which are defined by the target attribute. For example, for the attribute "occupation," the named entity must be an organization.
2) A keyword for the target relation must appear in that sentence.
Besides the keyword-based patterns, we also design patterns that search for the personal information in question in the whole fragment, so as to identify some web-specific expressions, e.g., the pattern in the informal style example in Figure 1. The "occupation" and "affiliation" attributes in that example are listed one by one in a fragment. Currently, we try to catch some common attribute-listing patterns in webpages, especially in homepages, for personal information expression.
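A minimal sketch of the keyword-based check described above is given below: a candidate named entity is linked to the focus person only if its NE type matches the type required by the target attribute and an attribute keyword occurs in the same sentence. The keyword and type tables here are illustrative assumptions, not the system's actual resources.

REQUIRED_TYPE = {"occupation": "ORG", "birth place": "GPE", "date of birth": "DATE"}
KEYWORDS = {
    "occupation": {"joined", "works", "professor"},
    "birth place": {"born"},
    "date of birth": {"born"},
}

def detect_relations(sentence, focus_mention, entities, attribute):
    # entities: list of (string, ne_type) candidates detected in the sentence
    if focus_mention not in sentence:
        return []
    words = set(sentence.lower().split())
    if not words & KEYWORDS[attribute]:
        return []
    return [(attribute, text) for text, ne_type in entities
            if ne_type == REQUIRED_TYPE[attribute]]

sent = "Anita Coleman joined the University of Arizona in 2001."
print(detect_relations(sent, "Anita Coleman",
                       [("University of Arizona", "ORG")], "occupation"))
# -> [('occupation', 'University of Arizona')]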
ML-based IE - EXERT: EXERT (Hacioglu & Chen, 2005) is a complete IE system which was developed for the ACE 2005 project, whose corpus is mainly a news collection. It includes three components: NER, co-reference and RDR. The EXERT system achieved competitive performances for all three components, especially RDR, in the ACE 2005 evaluation.
Personal IE systems.
Based on the different ways of combining the IE components given above, we develop three personal IE systems, each representing one type of integration method.
Rule-based IE:
The IE system consists of the rule-based NER and the rule-based RDR.
Hybrid IE: The IE system includes BBN IdentiFinder and the rule-based RDR. To achieve a high precision, this hybrid IE system is applied only to five attributes: date of birth, birth place, occupation, affiliation and school.
ML-based IE:
It is a direct application of the EXERT system to the WePS 2009 task. Because the WePS and ACE projects focus on different relations, we define a mapping to convert the ACE output format to the WePS personal attribute output format. For example, if the type of the mention of the focus person is "nominal" in the ACE output, the mention string is assigned as "occupation" in the WePS output. However, the EXERT system provides only six WePS attributes (relations): nationality, affiliation, school, relatives, occupation and other name.
As mentioned, different IE components are often designed or developed for a specific type of text. To save time and achieve high accuracy, for each IE system and each webpage, using the information obtained from preprocessing, we choose only specific fragments or sentences (as listed in Table 2) to run it on. In our framework, first, all sentences that contain the various expressions of the focus personal name, e.g., "Anita S. Coleman" or "Coleman, Anita" for "Anita Coleman," are detected using the rules in our rule-based NER. Then, each IE system runs separately, but all of them are limited to the text according to Table 2.
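For concreteness, a rough sketch of this dispatch is given below, at fragment rather than sentence granularity; the function and label names are placeholders and not the actual implementation.

def mentions_focus_person(lines, focus_name):
    return any(focus_name.lower() in line.lower() for line in lines)

def select_fragments(system, page_type, fragments, focus_name):
    # fragments: list of (style, lines) pairs produced by the segmentation step
    selected = []
    on_personal_page = page_type in ("homepage", "related page")
    for style, lines in fragments:
        mentioned = mentions_focus_person(lines, focus_name)
        if system == "rule-based" and (on_personal_page or mentioned):
            selected.append(lines)
        elif system == "hybrid" and style == "formal" and (on_personal_page or mentioned):
            selected.append(lines)
        elif system == "ml-based" and style == "formal" and mentioned:
            selected.append(lines)
    return selected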
Experiment
In this paper, we make use of the WePS 2009 corpus to conduct the experiment. The Web People Search (WePS) evaluation provides a forum for a standard evaluation which focuses on IE for personal named entities in web data. There are two tasks in the WePS 2009 evaluation: clustering and attribute extraction (AE). The clustering task, which can also be called personal name disambiguation, groups webpages according to whether the given personal name occurring in each webpage refers to the same person in reality. Attribute extraction, which can be considered a special case of IE, extracts certain personal information for a focus person with the given personal name. In this paper, we focus only on the AE task. Although the WePS 2009 corpus is small compared to the huge amount of web data, it is still able to show the web personal information distribution and to demonstrate how our IE framework works for web data.
Data Analysis
The WePS 2009 AE corpus includes 18 personal names in the training data and 30 personal names in the test data, with 3,468 documents in total (Sekine & Artiles, 2009). The data set for each personal name consists of about 100 webpages that contain the focus personal name. For each webpage, 16 kinds of attributes (see Table 1) are extracted if present.
First, we want to get an idea of the personal information distribution in web data, which can reflect the soundness of our text-expression-format division (formal and informal fragments). For each webpage in the WePS 2009 AE test data and each gold-standard attribute value in that webpage, we search the webpage and count how often this attribute value occurs in a formal fragment and in an informal fragment, and report the counts in Table 3. Note that some attribute values may occur in both formal and informal fragments.
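The counting procedure can be sketched as follows; the data structures (pages as fragment lists plus gold attribute-value pairs) are illustrative assumptions.

from collections import Counter

def distribution(pages):
    # pages: list of (fragments, gold) where fragments = [(style, lines), ...]
    # and gold = [(attribute, value), ...] is the gold-standard annotation of the page
    counts = Counter()
    for fragments, gold in pages:
        for attribute, value in gold:
            for style, lines in fragments:
                if any(value in line for line in lines):
                    counts[(attribute, style)] += 1  # a value may count for both styles
    return counts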
In Table 3, we can see that the distribution varies depending on the attribute type. In general, some simple attributes, whose values can often be caught by fixed patterns, such as "email," "phone," and "fax," are more likely to be expressed in an informal style, whereas some complicated attributes, whose values are often extracted by an ML-based NER, such as "relatives," "affiliation," and "occupation," are more likely to occur in formal-style fragments. Nevertheless, some attributes, such as "date of birth" and "school", do not show any preference in how they are expressed in web data. Table 3 can also give some ideas for improving a personal IE system, as it indicates which attributes tend to be expressed in formal or informal text styles. In such a way, we know where more effort is needed. For example, for "email," it is better to do more work on web-specific patterns, whereas for "occupation," a traditional ML-based NER and RDR may be a good choice.
Performances
All of our experiments run on the WePS 2009 AE test data, and the results are evaluated by the scoring provided by WePS 2009. We first run the pure rule-based IE system, and then add the hybrid IE system and the pure ML-based IE system one by one. The performances are presented in Table 4. In addition, we also give the performance of the purely ML-based IE system. In Table 4, we notice that performance increases consistently as more IE technologies are incorporated, and that neither rule-based IE nor ML-based IE alone works well for web data. The final combination system (Rule-based + Hybrid IE + ML-based IE), which can complement both rule-based and ML-based technologies, has achieved the best F score (18.89). It beats the top system in the WePS 2009 AE evaluation (Sekine & Artiles, 2009): 12.22. However, the low F score also indicates that personal information extraction from web data is still a big challenge. Therefore, more effort is needed in this respect.
In addition, compared to most AE systems participating in the WePS 2009 AE evaluation, our system achieves a higher precision. One possible cause of the phenomenon of high recall and low precision in those systems is the noisy NER systems they used to detect possible attribute value candidates. In our system, however, we first notice that the rule-based IE system has very high precision (36.31), which is much higher than the highest precision (30.4) reported in the WePS 2009 AE evaluation (Sekine & Artiles, 2009). This indicates that our web-specific patterns are effective for web IE. Meanwhile, high precision is very important for real applications. Moreover, we also find that incorporating ML-based IE can improve recall while not hurting precision too much. This indicates that our integrated approach, which is based on the web type information and the web text expression style, can effectively combine the two different NER technologies (rule-based and ML-based) into one system for web data.
We also show the detailed performances (F scores) for the 16 kinds of WePS attributes in Table 5. As mentioned, the hybrid IE system and the ML-based IE system affect the extraction of only a few attributes, so the performances of the other attributes do not differ when incorporating those two IE systems. Therefore, we use "same" in Table 5 to indicate no change.
It is not surprising to notice, in Table 5, that the attributes "email," "fax," and "phone" achieve very good performances even with the rule-based IE, because they are almost fixed expressions. On the other hand, the attributes "affiliation," "occupation," and "award" do not perform well even with the final combination system, because they can be expressed in various ways which cannot be easily caught either by the rule-based system or by the ML-based IE system. Nevertheless, from Table 5 we can notice that integrating different IE systems can complement the performances for some specific attributes and further improve the overall performance. When we look at the performance variation closely, we find that the increase of the overall performance is largely due to the improvement of the precision, which indicates that our framework can compatibly integrate different IE technologies for web data, and is therefore flexible enough to add more IE components.
As shown in Table 5, we find that the performances of "birth place" and "date of birth" improve significantly after incorporating the hybrid NER. However, from Table 3 we know that "birth" information appears almost evenly in both kinds of fragment, especially for "date of birth." This indicates that the birth information in formal style is easier to detect if the birth value candidates can be detected by a NER, whereas this kind of information in informal style is somewhat noisy. When adding the ML-based IE system, we find that the performances of "affiliation" and "occupation" improve, and this phenomenon is consistent with the information distribution (the information of "affiliation" and "occupation" is more likely in formal style text). The performances of "nationality" and "relatives" do not change much, although the ML-based IE should be able to extract this kind of information. This indicates that this kind of information extraction needs more effort.
Table 5: Performances for each attribute of our IE systems on the WePS 2009 test data ("same" means this extraction approach is the same as the previous one)
Figure 2: The algorithm for webpage type detection:
if the title of the webpage contains a keyword for "homepage": web type = "homepage"
elif the title is the person name: web type = "homepage"
elif the title contains the person name: web type = "related page"
elif the URL of the webpage contains the last name or the first name of the personal name: web type = "homepage"
else: web type = "others"
Figure 1: Examples of two kinds of fragments
Formal style fragment: Anita Sundaram Coleman is an Assistant Professor in the School of Information Resources & Library Science at the University of Arizona, Tucson, which she joined in 2001.
Informal style fragment:
Anita Coleman
Assistant Professor
School of Information Resources & Library Science
1515 E. First St.
University of Arizona
Tucson, AZ 85719
Table 1: List of attributes in the WePS 2009 AE task
Attribute names: date of birth, birth place, other name, occupation, affiliation, relatives, phone, fax, email, website, nationality, degree, major, school, mentor, award
3.2.1. IE components.
Table 2: Focus fragments for each IE system
Rule-based IE: homepage and related page - any fragment; others - any fragment containing a mention of the focus personal name.
Hybrid IE: homepage and related page - any formal style fragment; others - the sentences in formal style fragments containing a mention of the focus personal name.
ML-based IE: both page types - the sentences in formal style fragments containing a mention of the focus personal name.
Table 3: The personal information distribution in the WePS 2009 test data
Attribute        Formal-style   Informal-style
affiliation          5,010          3,672
occupation           7,595          4,344
birth place            347            190
date of birth          251            265
email                   67            163
fax                      3             68
phone                   50            217
website                 84             79
degree                 340            519
major                  165            135
school                 478            397
mentor                 551            164
other name           1,182            739
award                  211            116
nationality            665            224
relatives            1,591            492
Table 4: Performances of our IE systems on the WePS 2009 test data
System                                     Precision   Recall   F score
Rule-based IE                                36.31       9.15    14.62
ML-based IE                                  28.95       5.19     8.80
Rule-based + Hybrid IE                       37.06      10.91    16.86
Rule-based + Hybrid IE + ML-based IE         31.90      13.42    18.89
http://nlp.uned.es/weps/
http://gate.ac.uk/
Conclusion
In this paper, we present a framework which integrates heterogeneous IE approaches for web personal information extraction. The small-scale experiments presented in this paper show that our framework is able to flexibly combine rule-based and ML-based NER for web personal IE, and the results are promising. In addition, heuristic patterns are developed to effectively catch web information from heterogeneous sources. These patterns can also be added incrementally to improve the performance. Hence, we believe that the present framework is a very robust approach to heterogeneous information extraction. It is important to note that, compared with uniform statistical systems, our integrated system has very high precision yet lower recall. This is because each integrated information approach covers only a percentage of the heterogeneous texts, yet extracts exactly the right target information. This is one reason why we believe our integrated system can be incrementally improved when new patterns or technologies are incorporated, while improvements with a uniform model will come at a much higher cost.
Nonetheless, the problem of web personal information extraction is far from being solved and more work is needed. We find that ML-based NER can improve the recall, so using a high-quality NER system for formal style text is part of our future work. However, it is still a big challenge to develop a high-quality NER just for informal style text in web data because of noisy surface cues. Moreover, the personal information distribution suggests that many complicated relations are expressed in formal style, and our fragment segmentation allows the use of existing RDR systems developed for formal style text. Therefore, we need to incorporate ML-based RDR, which can detect more kinds of attributes, into our system in the future. Finally, the question of how to effectively extract and use web-specific information in personal information extraction, such as webpage type and web-specific patterns, also needs more exploration.
Bikel, D. M., R. L. Schwartz, and R. M. Weischedel. 1999. An algorithm that learns what's in a name. Machine Learning.
Culotta, A., R. Bekkerman, and A. McCallum. 2004. Extracting social networks and contact information from email and the web. In Proceedings of CEAS-04.
Etzioni, O., M. Cafarella, D. Downey, A.-M. Popescu, T. Shaked, S. Soderland, D. S. Weld, and A. Yates. 2005. Unsupervised named-entity extraction from the Web: An experimental study. Artificial Intelligence, 165, 91-134.
Feldman, R. 2002. Text mining. In Kloesgen, W. and Zytkow, J. (eds.), Handbook of Data Mining and Knowledge Discovery. MIT Press, Cambridge, MA.
Hacioglu, K. and Y. Chen. 2005. University of Colorado (CU) ACE 2005 System. ACE-05 Evaluation Workshop, NIST, Gaithersburg, MD.
Lan, M., Y. Z. Zhang, Y. Lu, J. Su, and C. L. Tan. 2009. Which Who are They? People Attribute Extraction and Disambiguation in Web Search Results. In 2nd Web People Search Evaluation Workshop (WePS 2009), 18th WWW Conference.
Mann, G. and D. Yarowsky. 2003. Unsupervised Personal Name Disambiguation. In Proceedings of CoNLL-2003.
Mann, G. 2006. Multi-Document Statistical Fact Extraction and Fusion. Ph.D. Thesis.
McCallum, A. and W. Li. 2003. Early results for named entity recognition with conditional random fields, feature induction and web-enhanced lexicons. In Proceedings of CoNLL-2003.
Minkov, E., R. Wang, and W. Cohen. 2005. Extracting Personal Names from Emails: Applying Named Entity Recognition to Informal Text. In Proceedings of HLT/EMNLP.
Rosenfeld, B. and R. Feldman. 2006. URES: an Unsupervised Web Relation Extraction System. In Proceedings of COLING/ACL.
Rosenfeld, B., R. Feldman, and M. Fresko. 2004. TEG: a hybrid approach to information extraction. In Proceedings of CIKM.
Sekine, S. and J. Artiles. 2009. WePS 2 Evaluation Campaign: Overview of the Web People Search Attribute Extraction Task. In 2nd Web People Search Evaluation Workshop (WePS 2009), 18th WWW Conference.
Soderland, S. 1999. Learning Information Extraction Rules for Semi-Structured and Free Text. Machine Learning, 34(1-3): 233-272.
Vilain, M., J. Su, and S. Lubar. 2007. Entity Extraction is a Boring Solved Problem - or is it? In Proceedings of NAACL HLT.
Watanabe, K., D. Bollegala, Y. Matsuo, and M. Ishizuka. 2009. A Two-Step Approach to Extracting Attributes for People on the Web in Web Search Results. In 2nd Web People Search Evaluation Workshop (WePS 2009), 18th WWW Conference. |
21,391,989 | DeQue: A Lexicon of Complex Prepositions and Conjunctions in French | We introduce DeQue, a lexicon covering French complex prepositions (CPRE) like à partir de (from) and complex conjunctions (CCONJ) like bien que (although). The lexicon includes fine-grained linguistic description based on empirical evidence. We describe the general characteristics of CPRE and CCONJ in French, with special focus on syntactic ambiguity. Then, we list the selection criteria used to build the lexicon and the corpus-based methodology employed to collect entries. Finally, we quantify the ambiguity of each construction by annotating around 100 sentences randomly taken from the FRWaC. In addition to its theoretical value, the resource has many potential practical applications. We intend to employ DeQue for treebank annotation and to train a dependency parser that takes complex constructions into account. | [
12408112,
2280293,
15631550
] | DeQue: A Lexicon of Complex Prepositions and Conjunctions in French
May 2016
Carlos Ramisch
LIF UMR 7279
Aix Marseille Université
CNRS
Alexis Nasr
LIF UMR 7279
Aix Marseille Université
CNRS
André Valli
LIF UMR 7279
Aix Marseille Université
CNRS
José Deulofeu
LIF UMR 7279
Aix Marseille Université
CNRS
Lexicon of Complex Prepositions and Conjunctions in French. Language Resources and Evaluation Conference (LREC 2016)
Portoroz, Slovenia, May 2016. HAL Id: hal-01464822, https://hal.archives-ouvertes.fr/hal-01464822
Keywords: Complex prepositions, complex conjunctions, multiword expressions, lexicon, French, dependency parsing
We introduce DeQue, a lexicon covering French complex prepositions (CPRE) like à partir de (from) and complex conjunctions (CCONJ) like bien que (although). The lexicon includes fine-grained linguistic description based on empirical evidence. We describe the general characteristics of CPRE and CCONJ in French, with special focus on syntactic ambiguity. Then, we list the selection criteria used to build the lexicon and the corpus-based methodology employed to collect entries. Finally, we quantify the ambiguity of each construction by annotating around 100 sentences randomly taken from the FRWaC. In addition to its theoretical value, the resource has many potential practical applications. We intend to employ DeQue for treebank annotation and to train a dependency parser that takes complex constructions into account.
Introduction
Complex prepositions (CPRE) and complex conjunctions (CCONJ) are two types of function words that consist of more than one orthographic word (Piot, 1993). They can be considered as fixed multiword expressions that allow little or no variability. Examples in English include the CCONJs even though and as well as and the CPREs up to and in front of. Examples in French are shown in Table 1 along with their English (EN) meaningful and literal translations. CPRE and CCONJ constructions are quite frequent in French. Their linguistic description in the literature is generally limited to building comprehensive lists of such constructions (Sagot, 2010). Most authors assume that these constructions allow no or very little variability (inflection, insertion). Therefore, they would not require a very sophisticated description and representation in machine-readable lexicons and NLP systems, such as the ones required for verbs, for instance (Dubois and Dubois-Charlier, 2004). An aspect which is often neglected is the segmentation and structural ambiguity that arises when the words composing the complex function word co-occur by pure chance. Consider examples 1 and 2 containing the French CCONJ bien que. It is composed of the words bien (well) and que (that), but when they act as a CCONJ they mean although.
(1) Je mange bien que je n'aie pas faim
    'I eat although I am not hungry'
(2) Je pense bien que je n'ai pas faim
    'I think indeed that I am not hungry'
In example 1, bien que is indeed a CCONJ that opposes the main clause (I eat) and the subordinate clause (I am not hungry). In example 2, however, bien que is not a CCONJ and the two words co-occur by chance. The adverb indeed modifies the verb of the main clause think, while the conjunction that introduces the clausal object. Since the word bien is a very common intensifier in French, such accidental co-occurrence cases are likely to occur with all verbs that accept que-clausal complements like think, say and forget.
From an NLP perspective, it is relevant to study these constructions in a parsing pipeline. Most of the time, we would be tempted to simplify the model and treat all of them as multiword tokens or words-with-spaces (Sag et al., 2002). However, accidental co-occurrence, like in example 2, creates ambiguities that are hard to solve at tokenisation time, especially given the simplicity of most automatic tokenisation approaches in French. A simplistic approach such as treating all occurrences of bien que as a single word with spaces inside would introduce an error for sentences like example 2. Conversely, ignoring it in example 1 would mean that both words are treated independently, not capturing the fact that the whole behaves like a conjunction. What is more, these errors would propagate to the following processing steps like POS tagging and parsing, certainly generating a wrong analysis.
The creation of DeQue takes place in the context of the development of a statistical dependency parser for French (Nasr et al., 2011). The need to quantify ambiguity has a practical consequence: unambiguous constructions can be included in the lexicon as frozen multiword tokens, while ambiguous ones need to be annotated and dealt with at parsing time.
One way of disambiguating ambiguous multiword units is to keep the tokens as individual lexical units during tokenisation and POS tagging, and then use special syntactic dependencies to indicate the presence of a CPRE or a CCONJ (McDonald et al., 2013;Candito and Constant, 2014;Green et al., 2013). In previous experiments, we demonstrated that this approach is superior to treating all units systematically as words with spaces (Nasr et al., 2015). However, this was only demonstrated for a small set of 8 CCONJs and 4 determiners in French. The present work substantially extends the coverage of the list of potentially ambiguous constructions that can be modelled using that approach.
In the remainder of this paper, we discuss the general properties and syntactic behaviour of simple prepositions and conjunctions in French (§2). Then, we present the criteria (§3) and methodology (§4) used to construct the lexicon. Finally, we present the lexicon's structure and examples (§5). We conclude by listing future extensions planned for this resource (§6).
Table 1: Examples of CPRE and CCONJ in French.

Construction    Type   EN meaning       EN literal
à partir de     CPRE   starting from    to leave of
par rapport à   CPRE   with respect to  for relation to
bien que        CCONJ  although         well that
de sorte que    CCONJ  so that          of sort that
Prepositions and Conjunctions
Before we can describe the criteria to select CPRE and CCONJ entries for DeQue, we must specify what we consider as simple prepositions (PRE) and conjunctions (CONJ). Indeed, criterion C1.3 below states that CPRE and CCONJ can be replaced by single-word PRE and CONJ. Therefore, we cannot apply it if we do not have a clear definition for these two categories. We distinguish PRE and CONJ according to the criteria below, based on the notion of active and passive valency.
In the framework of dependency syntax, the active valency of a word is defined as its set of acceptable syntactic dependants. For example, nouns can govern determiners, so the active valency of nouns includes determiners. The passive valency is defined as the set of acceptable syntactic governors. For example, adjectives can be governed by nouns, so nouns are in the passive valency of adjectives. Because some complex adverbs behave similarly to complex conjunctions, we also have to define the passive and active valency of adverbs.

Preposition (PRE)
Closed-class words (to, for, before) that relate two elements in a sentence, typically introducing verbal or nominal complements as the heads of prepositional phrases.
• Active valency: a PRE can govern noun phrases (à la maison, at home), infinitive verbs (sans pleurer, without crying), clauses introduced by conjunctions (pour que je vienne,lit. for that I come), etc. However, they can never govern bare clauses with inflected verbs not introduced by a conjunction (*pour je vienne, *for I come).
• Passive valency: a PRE cannot be the root of a dependency tree, it is necessarily governed by another word. If it is not governed, it is an idiomatic construction: en avant ! (move forward!), au secours ! (help!)
Conjunction (CONJ)
Closed-class words (that, if, when) that relate two elements in a sentence, typically linking two full clauses.¹
• Active valency: differently from a PRE, a CONJ can govern a bare clause, but it can never govern another phrase introduced by a CONJ.
• Passive valency: a CONJ cannot be the root of a dependency tree, it is necessarily governed by another word. If it is not governed, it is an idiomatic construction: si on allait au cinéma ? (what if we went to the movies?). In other words, conjunctions cannot introduce single clauses, they can only link two clauses.
Adverbs (ADV)
Open-class words that generally modify verbs, adjectives or other adverbs.
• Active/passive valency: Adverbs induce a special relation between active and passive valency. An ADV cannot govern a CONJ when it is itself governed by another word (*je pense que peut-être qu'il vient (*I think that perhaps that he will come). In French, an ADV can govern a CONJ if the ADV is the root of the dependency tree (peut-être qu'elle viendra, lit. perhaps that she will come). This distinguishes PRE+que constructions (pour que je vienne, so that I come) from ADV+que constructions (peut-être que, perhaps that). When a governed adverb can govern a clause introduced by que (surtout que, alors que, bien que), we consider it as a CCONJ (see examples provided in criterion C1 below).
Complex Prepositions and Conjunctions
This paper presents DeQue, a new computational lexicon under development. DeQue lists and models the syntactic behaviour of around 280 CPREs headed by de and CCONJs headed by que in French. The goal of this resource is twofold:
• Provide a detailed and broad-coverage linguistic description of the possible syntactic analyses of each construction.
• Quantify the ambiguity of CPRE and CCONJ constructions based on corpus evidence.
Constructions in DeQue are CPREs headed by the preposition de (of ) and CCONJs headed by the conjunction que (that). These are undoubtedly the most frequent simple prepositions and conjunctions in French. Moreover, they present a very rich co-occurrence pattern, that is, their usages distribution is very heterogeneous. When used as prepositions and conjunctions, de and que are quite "promiscuous" and combine with many types of modifiers. For instance, the conjunction que can combine with adverbs (bien que, lit. well that), prepositional phrases (à condition que, lit. at condition that), noun phrases (le temps de, lit. the time of ), and so on. These modifiers often change or specify the meaning of the relation. For instance, while que expresses a quite general subordinating relation, bien que expresses opposition, si bien que expresses consequences, and so on. One of the challenges in building DeQue was the fact that de and que combine with several complements, including open-class words like nouns, verbs and adverbs. Therefore, it is impossible to guarantee that our lexicon is exhaustive. In addition to that, when we query the corpus for fine POS sequences (see Section 4.), many false positives are returned because of frequent open-class words that accidentally co-occur with de and que. We define CCONJ and CPRE for inclusion in DeQue based on three criteria. First, they are groups of words that function as prepositions or conjunctions as a whole. Second, they are potentially ambiguous and contain words that could co-occur by chance. Third, they present some degree of idiomaticity, realised through syntactic and semantic fixedness. Figure 1 summarizes the decision tree used to apply the criteria below in order. Criterion C1.1 guarantees that the construction is "complex", meaning that it is composed by more than one token. The last part of the criterion, that is, the fact that the last word is de or que, is only justified because, for the moment, we wanted to limit the scope of DeQue to the most frequent endogenous 2 CPRE and CCONJ. In the future, we intend to extend our lexicon to less frequent function words like CPREs headed by à (to) and CCONJs headed by où (where). Criterion C1.2 aims at excluding regular syntactic constructions such as simple prepositions followed by que. Most prepositions in French, like pour (for) and après (after), can have their complement introduced by que, which allows using a full clause as the complement of the preposition (see examples 3 and 4). Since this is the case for most prepositions, there is nothing special about the syntactic structure of this construction. Every time it appears, it can be modeled as a preposition that governs a que-clause. Moreover, prepositions always require some postponed complement, and there is no possible accidental cooccurrence here.
2 A group is endogenous if the POS of the whole, in our case, PRE and CONJ, can be found in one of the parts, in our case de and que.
(3) Il travaille pour la collecte d'aliments
    'He works for the food drive'
(4) Il travaille pour que les aliments soient collectés
    'He works so that food is collected'

Criterion C1.3 helps exclude constructions that look like CPRE and CCONJ but actually are not. For instance, peut-être que (lit. maybe that) looks like a CCONJ where que is modified by the adverb peut-être. One argument against this interpretation is the fact that it can appear in an isolated clause (example 5); that is, it does not respect the passive valency definition for CONJ described in Section 2. Moreover, here the adverb is the syntactic head, inasmuch as que can be omitted (example 6). Many modal adverbs in French exhibit this behaviour, like certainement (certainly), probablement (probably) and sans doute (undoubtedly).
(5) Peut-être que je viendrai ce soir
    'Maybe I will come this evening'
(6) Peut-être je viendrai ce soir
    'Maybe I will come this evening'

C2: Autonomous Lexical Units
We require that the individual words composing a CPRE/CCONJ are autonomous lexical units. This means that they have their own distribution, co-occurring with other words in other contexts. Criterion C2 aims at excluding constructions that are surely not ambiguous. For instance, parce que (because) contains the word parce, which never co-occurs with a word other than que. This means that there is no possible accidental co-occurrence, and this sequence of tokens is never ambiguous. Tokenization as a word with spaces suffices to represent it in treebanks and parsers. Expressions that pass the tests for C1 but not C2 are not directly discarded, but listed in a separate lexicon of frozen constructions.
C3: Fixedness
We keep in DeQue only those constructions that are somehow fixed. We assume that fixedness is a good proxy for semantic idiomaticity, but offers more formal ways of being tested. The traditional definition of idiomaticity is based on semantic non-compositionality. In other words, the meaning of the parts does not add up to the meaning of the whole. Here, it would be hard (if not impossible) to apply this test since most of the time our entries only contain a single content word. We cite below some fixedness tests applied depending on the POS of the words preceding de and que. The restrictions below are observed with respect to free combinations of each POS forming the unit. We list below some tests used depending on the POS of the open-class word in the construction.
C3.1 If the unit includes a prepositional phrase, changing the preposition, or using the unit without the preposition, entails a change of meaning of the open-class word. For example, while the meaning of the noun centre is unchanged in the sequences au centre de - vers le centre de (in the centre of - toward the centre of), this does not happen for moins (less) in à moins de - pour moins de (unless - for less than).
C3.2 If the unit includes a determiner, no change of determiner is possible without changing the meaning of the open-class word. For example, en raison de means roughly because, but en la raison de can only literally mean in the reason of.
C3.3 Restrictions are observed on the range of acceptable insertions and substitutions of the open-class word:
(a) Parenthetical or appositive modifiers are allowed: en fonction, évidemment, de la météo (depending, of course, on the weather).
(b) If the open-class word is a noun, qualifying adjectives are prohibited, intensifying adjectives are allowed: à proportion exacte de (at the precise proportion of), *à proportion logarithmique de (*at the logarithmic proportion of).
(c) If the open-class word is an infinitive verb
Methodology
The first step in the creation of DeQue was the selection of our target lexical entries. In order to construct this initial lexicon, we design a methodology that combines linguistic expertise and corpora evidence. This methodology helped us to define precise criteria listed in Section 3. for inclusion of an entry in DeQue. Once the list of entries in the lexicon was stabilized, we model ambiguity using a similar process, combining linguistic expertise and corpora evidence. The corpus used in our queries is the French web-as-corpus (FRWaC), which contains a web dump of 1.613 billion words of French (Baroni et al., 2009). It was chosen mainly for its size, availability and because it presents a fairly decent balance between formal and informal writing. Additionally, it was automatically tagged with parts of speech (POS) using the TreeTagger.
Selection of Lexical Entries
The selection of lexical entries to include in DeQue was performed as follows:
1. We list potential de-CPRE and que-CCONJ based on introspection and existing general-purpose lexical resources like LEFFF (Sagot, 2010). For example, this initial list includes candidate conjunctions like si bien que (so that, lit. so well that) and bien sûr que (sure that).
2. For each candidate in this list, we manually annotate the fine POS sequence and global chunk tag of the elements that co-occur with de and que. For instance, si bien que has the fine POS sequence ADV-ADV-que, and the chunk tag GADV-que.³
3. We query the FRWaC, retrieving all n-grams that have the fine POS sequences annotated in the previous step and that occur more than 20 times. For instance, the search for ADV-ADV-que returned new entries like alors même que and si peu que.
4. We select, in this list, additional CPRE and CCONJ entries that we consider relevant according to the criteria described above. Some of the entries that were initially selected in step 1 were removed because they do not respect the inclusion criteria. For instance, bien sûr que was discarded because it does not behave as a conjunction and cannot be replaced by a single-word CONJ, not meeting criterion C1.3.
Some constructions selected as initial candidates turned out to be quite infrequent in the corpus (e.g. au moment que).
We decided to keep them in the lexicon, attributing their low frequency to the nature and quite informal register of the FRWaC. The final list of selected constructions contains 228 CPRE and 49 CCONJ.
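As an illustration of step 3 of this procedure, the corpus query can be approximated by a frequency count over POS n-grams. The sketch below assumes the corpus is available as lists of (token, POS) pairs; the function name, the way the final de/que word is matched literally, and the frequency threshold handling are ours, not the tooling actually used on the FRWaC.

```python
from collections import Counter

def pos_ngram_candidates(tagged_sentences, pos_pattern, min_freq=20):
    """Collect token n-grams whose POS sequence matches pos_pattern.

    tagged_sentences: iterable of sentences, each a list of (token, pos) pairs.
    pos_pattern: tuple such as ("ADV", "ADV", "que"); the last element is the
    literal word de/que, the others are POS tags.
    Returns candidates seen at least min_freq times, most frequent first.
    """
    n = len(pos_pattern)
    counts = Counter()
    for sentence in tagged_sentences:
        for i in range(len(sentence) - n + 1):
            window = sentence[i:i + n]
            if all(tag == pat for (_, tag), pat in zip(window[:-1], pos_pattern[:-1])) \
               and window[-1][0].lower() == pos_pattern[-1]:
                counts[" ".join(tok for tok, _ in window)] += 1
    return [(cand, c) for cand, c in counts.most_common() if c >= min_freq]

# Toy example: search for ADV-ADV-que candidates such as "alors même que".
sents = [[("alors", "ADV"), ("même", "ADV"), ("que", "CSU"), ("je", "PRO")]]
print(pos_ngram_candidates(sents, ("ADV", "ADV", "que"), min_freq=1))
```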
Ambiguity Assessment
For each target construction, we would like to estimate whether it is ambiguous. In that case, we would also like to know what proportion of uses correspond to CPRE and CCONJ readings with respect to accidental cooccurrence. Therefore, we also employ a heterogeneous methodology mixing linguistic expertise and corpus linguistics.
1. We build artificial sentences that exemplify the usage of each lexical entry. We number the examples, 1 for a use as a CPRE/CCONJ and 2 for other uses. For instance, examples 1 and 2 discussed in Section 1. are the sentences that exemplify the usages of the lexical entry bien que.
2. We select sentences in the FRWaC containing the word sequence of the lexical entry, as follows:
(a) We select any sentence in the FRWaC that contains exactly one occurrence of the target construction, including contractions like du (de+le) and qu' (que+vowel).
(b) We keep only sentences that have more than 10 words (enough context is provided) and less than 20 words (annotation is faster).
(c) We shuffle the order of sentences to favour variability.
(d) We highlight the target construction to facilitate subsequent annotation.
3. For each sentence, we annotate it as 1 (CPRE/CCONJ) or 2 (other uses). Sentences that have too many orthography and/or grammar errors are discarded. Sentences that are ambiguous and require extra context (previous/next sentence) are discarded as well. We annotate around 100 sentences per construction (or less, according to their frequency in the corpus). Each sentence was annotated by at least 2 experts, and conflicts were resolved during meetings.
4. Based on the insights from annotation, we describe the full syntactic structure of the construction by annotating the full dependency tree of the example sentences of each usage case. This includes comments about the most natural internal structure. For instance, it is reasonable to argue in favor of que acting as a syntactic head and bien being its dependent in bien que (Nasr et al., 2015).
The first step models the ambiguity of each construction in theory. Therefore, it is possible to know whether a construction is potentially ambiguous and requires some special treatment. Steps 2 and 3 quantify this ambiguity in practice, through empirical evidence. For example, in the case of bien que, we have annotated 99 sentences, from which 37.4% are CCONJ uses and 62.6% are other uses. The result of the last step details the syntactic ambiguity and suggests a representation for the target construction in a parser and/or treebank.
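A minimal sketch of the sentence selection in step 2 is given below. It assumes plain tokenised sentences, ignores contractions such as du and qu' (which the real procedure covers), and all names are illustrative.

```python
import random

def select_annotation_sentences(sentences, target, n_samples=100, seed=0):
    """Pick candidate sentences for manual ambiguity annotation.

    sentences: iterable of sentences, each a list of tokens.
    target: the construction as a list of tokens, e.g. ["bien", "que"].
    Keeps sentences with exactly one occurrence of the target and a length
    between 10 and 20 words, shuffles them, and highlights the target.
    """
    def occurrences(tokens):
        m = len(target)
        return sum(1 for i in range(len(tokens) - m + 1)
                   if tokens[i:i + m] == target)

    kept = [s for s in sentences if 10 < len(s) < 20 and occurrences(s) == 1]
    random.Random(seed).shuffle(kept)  # favour variability
    highlighted = []
    for s in kept[:n_samples]:
        text = " ".join(s)
        # Mark the target construction to facilitate subsequent annotation.
        highlighted.append(text.replace(" ".join(target),
                                        "*" + " ".join(target) + "*", 1))
    return highlighted
```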
Resource Structure
The methodology outlined in the previous section results in a modular resource, composed of 4 parts, shown in Figure 2.

Lexicon
The main lexicon contains information about the CPRE or CCONJ entries. Some examples of lexical entries are shown in Tables 2 and 3. In addition to the entry's canonical form, it provides fine and chunk POS tags. The fine POS sequence corresponds to the POS tags of the individual words used in corpus searches. The tagset comes from the POS tags in the FRWaC corpus, which was automatically tagged by the TreeTagger. The third column shows the number of entries in DeQue that follow each fine POS pattern. We observe that prepositional phrases are the most productive complements of de-CPREs, while adverbs seem to be the most common types of complements in que-CCONJs. The chunk POS is useful to group similar patterns and observe paradigms on a coarser scale. In addition to the fields shown in the tables, we also provide corpus-related information such as the number of sentences of the FRWaC that contain the lexical entry's tokens. This is a raw number, and represents all uses regardless of whether the entry was really used as a CPRE/CCONJ or if it was an accidental co-occurrence.
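To make these fields concrete, an entry can be pictured as a small record, as in the sketch below. The field names are ours (the distributed lexicon may use different ones), and the FRWaC sentence count is a placeholder rather than a figure from the corpus.

```python
from dataclasses import dataclass

@dataclass
class DeQueEntry:
    canonical_form: str    # the CPRE/CCONJ itself
    fine_pos: str          # POS tags of the individual words, FRWaC tagset
    chunk_pos: str         # coarse chunk tag of the modifier + de/que
    n_entries: int         # entries in DeQue following this fine POS pattern
    frwac_sentences: int   # raw number of FRWaC sentences containing the tokens

# Example built from the ADV + que pattern of Table 2 (20 entries, e.g. ainsi que);
# the last value is a placeholder, not a count reported in the paper.
ainsi_que = DeQueEntry("ainsi que", "ADV que", "GADV-que", 20, 0)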
Ambiguity Examples
We represent the ambiguity of lexical entries in two different ways, which account for the possibility and the likelihood of an entry being ambiguous. Ambiguity examples are artificial sentences that we build up in order to illustrate all possible uses of an entry. They correspond to prototypical uses of the construction that help annotators understand the ambiguity. Most of the time, they are adapted from the annotated sentences described below.
Ambiguity Annotation
Some constructions are likely to be employed in different uses, but others have a skewed distribution that makes one of the uses very rare. As explained above, in order to quantify this ambiguity, we selected a set of around 100 sentences per entry, which were annotated using a simple distinction: 1 for CPRE/CCONJ and 2 for other uses. The examples below show some real sentences annotated for bien que. We note that "other uses" merges different phenomena. For example, the second sentence below contains the noun bien (goods), which is different from the first sentence where bien has its more usual role as adverb. Both are annotated as 2 because we consider that POS ambiguities will arise and be solved by an external process, not using information in DeQue.
2  Il semble bien que la profession préfère temporiser pour rafler une part des recettes provenant d'internet.
2  Je veux toutefois être encore de vos amis ; mais ne demandez plus un bien que j'ai promis.
1  Bien qu'elle l'ait inspiré, la religion védique est très différente de l'hindouisme d'aujourd'hui.
1  La pièce n'a besoin de rien, bien qu'il n'y ait rien là.
2  Je sais bien que votre coeur ne se détachera pas de lui-même.
Syntactic Description
We propose a full dependency tree for the ambiguity examples. This provides a way to distinguish CPRE and CCONJ constructions from accidental cooccurrence using special relation MORPH between the constituting elements (Nasr et al., 2015). When interpreted as a CPRE or CCONJ, the whole MWE acts as a single preposition or conjunction. Therefore, we argue that de and que should be the syntactic heads. Modifiers like noun phrases and adverbs often have regular syntactic structure, and their heads are governed by the preposition or conjunction. For instance, the syntactic structure of expressions bien que (although) and à condition de (conditioned to) in reading 1 is shown below:
[Dependency sketches: in bien que, bien attaches to the head que via the MORPH relation; in à condition de, the internal elements attach to the head de via MORPH and OBJ relations.]
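For concreteness, the part of this analysis stated explicitly above can be serialised as plain head-dependent triples; this is only an illustrative encoding with a made-up variable name, not the format distributed with DeQue nor the full tree of the example sentence.

```python
# Head-dependent triples (dependent, relation, head) for the CCONJ reading of
# bien que: que is the syntactic head of the construction and bien attaches
# to it through the special MORPH relation (Nasr et al., 2015). The attachment
# of que to the governing verb of the sentence is omitted here.
bien_que_reading_1 = [("bien", "MORPH", "que")]
```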
Future Developments
In the future, we would like to extend this lexicon to other CPRE and CCONJ constructions. This includes, for instance, CCONJ headed by si (if), quand (when) and où (where), CPRE headed by à (to) and en (in), and also complex determiners and pronouns. This lexicon can be very useful for parser development and adaptation to a given domain. For instance, if we want to build a very robust parser for literary texts, we would need to model all theoretically ambiguous constructions using MORPH links. On the other hand, a fast parser for speech transcriptions could safely ignore constructions that rarely co-occur by accident, setting a threshold on the proportion: for instance, all combinations that occur 90% of the time as complex function words can simply be concatenated as a single token (a possible decision rule is sketched below). This helps parser designers make more informed decisions about the best moment to deal with these complex function words in the analysis pipeline.
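One possible way to operationalise such a threshold, using the annotated proportions stored in DeQue (for instance the roughly 37 CCONJ vs. 62 other uses observed for bien que), is sketched below. The function and the returned labels are hypothetical; only the 90% cut-off comes from the discussion above.

```python
def tokenisation_policy(func_word_uses, other_uses, threshold=0.9):
    """Decide how a parser should treat a candidate construction, based on the
    proportion of annotated CPRE/CCONJ uses (the threshold is a design knob)."""
    total = func_word_uses + other_uses
    ratio = func_word_uses / total if total else 0.0
    if ratio >= threshold:
        return "words-with-spaces"  # merge at tokenisation time
    return "MORPH-link"             # keep tokens apart, decide at parsing time

# bien que: 37 CCONJ uses vs 62 other uses in the annotated sample
print(tokenisation_policy(37, 62))   # -> "MORPH-link"
```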
We would like to build more fine-grained syntacticsemantic clusters for each construction type, depending on the distribution and fixedness of internal elements. For instance, locative relational nouns like north, centre, etc can build relational CPREs like au nord de (north of). They accept similar modifications and are very different from causative CPREs like en raison de and à cause de (both mean roughly because). We would like to obtain this information by studying the link between the corpus distribution and cooccurrence pattern of the internal elements and of the whole expression. This can be related either to the linguistic context of the construction itself, but also to the usage context, that is, genre and domain of the text.
Bibliographical References
Figure 2: Structure of DeQue lexicon and example entry.
Figure 1: Decision tree corresponding to the application of the criteria for lexical entry selection in DeQue.

C1: Function as PRE/CONJ
C1.1 A CPRE/CCONJ in DeQue consists of a group of at least two words ending with de/que.
C1.2 A CPRE/CCONJ in DeQue includes at least one open-class (or content) word, that is, one noun, adjective, adverb or verb.
C1.3 A CPRE/CCONJ in DeQue commutes with a similar single-word PRE/CONJ, keeping the sentence's acceptability and a similar meaning.
Table 2: Example CCONJ patterns in DeQue main lexicon.

Fine POS + que     Chunk  #Conj  Example
ADV ADV            GADV   14     alors même que
ADV                GADV   20     ainsi que
DET NOM            GNOM   3      la preuve que
NOM                GNOM   3      faute que
PRE ADJ NOM        GPRE   5      à tel point que
PRE ADV ADV        GPRE   1      d'autant plus que
PRE ADV            GPRE   11     à moins que
PRE DET ADJ NOM    GPRE   3      au même titre que
PRE DET NOM        GPRE   9      à l'idée que
PRE NOM            GPRE   12     à condition que
VPP                GVRB   2      attendu que
Total CCONJ               49

Table 3: Example CPRE patterns in DeQue main lexicon.

Fine POS + de      Chunk  #Pre   Example
ADV CSU            GCSU   1      plutôt que de
ADV                GADV   3      autour de
DET NOM            GNOM   1      le temps de
NOM                GNOM   4      faute de
PRE ADV            GPRE   7      à court de
PRE DET NOM        GPRE   121    à l'abri de
PRE DET VINF       GPRE   1      au sortir de
PRE NOM            GPRE   85     à base de
PRE VINF           GPRE   5      à compter de
Total CPRE                228
The distinction between subordinating and coordinating conjunctions is not relevant for this work.
For fine POS sequences, we use the POS tagset of the FRWaC corpus. Chunk tags are: adverbial phrase (GADV), prepositional phrase (GPRE), noun phrase (GNOM), subordinate clause phrase (GCSU) and verb phrase (GVRB), suffixed by de or que.
The wacky wide web: a collection of very large linguistically processed web-crawled corpora. Language resources and evaluation. M Baroni, S Bernardini, A Ferraresi, E Zanchetta, 43Baroni, M., Bernardini, S., Ferraresi, A., and Zanchetta, E. (2009). The wacky wide web: a collection of very large linguistically processed web-crawled corpora. Language resources and evaluation, 43(3):209-226.
Strategies for contiguous multiword expression analysis and dependency parsing. M Candito, M Constant, Proc. of the 52nd ACL. of the 52nd ACLBaltimore, MD, USA1Long Papers)Candito, M. and Constant, M. (2014). Strategies for con- tiguous multiword expression analysis and dependency parsing. In Proc. of the 52nd ACL (Volume 1: Long Pa- pers), pages 743-753, Baltimore, MD, USA, Jun. ACL.
Locutions en français. J Dubois, F Dubois-Charlier, Aix-en-Provencechez les auteursDubois, J. and Dubois-Charlier, F. (2004). Locutions en français. Aix-en-Provence: chez les auteurs.
Parsing models for identifying multiword expressions. S Green, M.-C De Marneffe, C D Manning, Computational Linguistics. 391Green, S., de Marneffe, M.-C., and Manning, C. D. (2013). Parsing models for identifying multiword expressions. Computational Linguistics, 39(1):195-227.
Universal dependency annotation for multilingual parsing. R T Mcdonald, J Nivre, Y Quirmbach-Brundage, Y Goldberg, D Das, K Ganchev, K B Hall, S Petrov, H Zhang, O Täckström, CiteseerMcDonald, R. T., Nivre, J., Quirmbach-Brundage, Y., Goldberg, Y., Das, D., Ganchev, K., Hall, K. B., Petrov, S., Zhang, H., Täckström, O., et al. (2013). Universal dependency annotation for multilingual parsing. In ACL (2), pages 92-97. Citeseer.
MACAON an NLP tool suite for processing word lattices. A Nasr, F Bechet, J.-F Rey, B Favre, J L Roux, Proceedings of the ACL 2011 System Demonstrations. the ACL 2011 System DemonstrationsPortland, OR, USANasr, A., Bechet, F., Rey, J.-F., Favre, B., and Roux, J. L. (2011). MACAON an NLP tool suite for processing word lattices. In Proceedings of the ACL 2011 System Demonstrations, pages 86-91, Portland, OR, USA, Jun. ACL.
Joint dependency parsing and multiword expression tokenization. A Nasr, C Ramisch, J Deulofeu, A Valli, Proceedings of ACL-IJCNLP 2015 (Long Papers). ACL-IJCNLP 2015 (Long Papers)Beijing, ChinaAssociation for Computational LinguisticsNasr, A., Ramisch, C., Deulofeu, J., and Valli, A. (2015). Joint dependency parsing and multiword expression to- kenization. In Proceedings of ACL-IJCNLP 2015 (Long Papers), pages 1116-1126, Beijing, China, July. Associ- ation for Computational Linguistics.
Les connecteurs du français. M Piot, Linguisticae investigationes. 171Piot, M. (1993). Les connecteurs du français. Linguisticae investigationes, 17(1):142-160.
Multiword expressions: A pain in the neck for nlp. I A Sag, T Baldwin, F Bond, A Copestake, D Flickinger, Computational Linguistics and Intelligent Text Processing. SpringerSag, I. A., Baldwin, T., Bond, F., Copestake, A., and Flickinger, D. (2002). Multiword expressions: A pain in the neck for nlp. In Computational Linguistics and Intelligent Text Processing, pages 1-15. Springer.
The lefff, a freely available and largecoverage morphological and syntactic lexicon for french. B Sagot, 7th international conference on Language Resources and Evaluation. Sagot, B. (2010). The lefff, a freely available and large- coverage morphological and syntactic lexicon for french. In 7th international conference on Language Resources and Evaluation (LREC 2010). |
125,681,496 | Segmentation multiple d'un flux de données textuelles pour la modélisation statistique du langage | In this article we deal with the problem of statistical language modelling for under-resourced languages whose writing system has no word boundary delimiters. While the lack of text resources has an impact on the performance of language models, the errors produced by automatic word segmentation can make those data even less usable. To better exploit the text resources, we propose a method that performs multiple segmentations of the training corpus instead of a unique segmentation. This method, based on finite-state machines, makes it possible to recover the n-grams missed by the unique segmentation and to generate new n-grams for language model training. Applying this approach to train the language models of automatic speech recognition systems for Khmer and Vietnamese proves more effective than the unique, rule-based segmentation method. Keywords: multiple segmentation, unsegmented language, statistical language modeling | [
16759312,
7375882
] | Segmentation multiple d'un flux de données textuelles pour la modélisation statistique du langage
2009
Sopheap Seng sopheap.seng@imag.fr
Laboratoire LIG/GETALP
Grenoble, France
Laboratoire MICA, CNRS/UMI-2954
Hanoi, Vietnam
Laurent Besacier laurent.besacier@imag.fr
Laboratoire LIG/GETALP
Grenoble, France
Brigitte Bigi brigitte.bigi@imag.fr
Laboratoire LIG/GETALP
Grenoble, France
Eric Castelli eric.castelli@mica.edu.vn
Laboratoire MICA, CNRS/UMI-2954
Hanoi, Vietnam
Segmentation multiple d'un flux de données textuelles pour la modélisation statistique du langage
TALN 2009 -Session posters
Senlis, 2009. Keywords: multiple segmentation, unsegmented language, statistical language modeling
Abstract: In this article we deal with the problem of statistical language modelling for under-resourced languages whose writing system has no word boundary delimiters. While the lack of text resources has an impact on the performance of language models, the errors produced by automatic word segmentation can make those data even less usable. To better exploit the text resources, we propose a method that performs multiple segmentations of the training corpus instead of a unique segmentation. This method, based on finite-state machines, makes it possible to recover the n-grams missed by the unique segmentation and to generate new n-grams for language model training. We use this approach to train the language models of automatic speech recognition systems for Khmer and Vietnamese, and it proves more effective than the unique, rule-based segmentation method.
Introduction
A statistical language model is a probability distribution over words or word sequences. It ranks words or sentences according to their probability of occurrence. Its goal is to assign a relatively high probability to word sequences that are frequent, meaningful and grammatically correct, and a low probability to word sequences that are rare, nonsensical or ungrammatical. Language models are used in applications such as automatic speech recognition, handwriting recognition, spelling correction, machine translation and any other application involving a linguistic component. The statistical nature of the approaches used in n-gram language modelling requires a large amount of text data to obtain an accurate estimate of the probabilities. Such data are not available in large quantities for so-called under-resourced languages, and the lack of training data has a direct impact on the performance of language models.

While the word is generally the basic unit in statistical language modelling, identifying words in a text is not a simple task, even for languages that separate words with a delimiter (usually a space). For so-called unsegmented languages, whose writing system has no explicit separation between words, word n-grams are estimated from training corpora segmented into words by automatic methods. Automatic segmentation is not a trivial task and introduces errors because of the ambiguities of natural language and the presence of unknown words in the text to be segmented. While the lack of text data has an impact on the performance of language models, the errors introduced by automatic segmentation can make those data even less usable. A possible alternative is to compute the probabilities from sub-lexical units. Among existing work that uses sub-lexical units for language modelling, we can cite (Kurimo, 2006), (Abdillahi, 2006) and (Afify, 2006), which use morphemes for modelling Finnish, Somali and Arabic respectively. For an unsegmented language such as Japanese, the character (ideogram) is used in (Denoual, 2006). In previous work on automatic speech recognition for Khmer¹, we exploited different lexical and sub-lexical units (word, syllable and character cluster²) for language modelling of this under-resourced language. We proposed simple language models based on the word, the syllable and the character cluster. Our goal was to compare the performance of these different units, and we observed that the word remains the best-performing unit.

In this article, we address the problem of word-based statistical language modelling for languages without explicit segmentation between words. While the lack of text data has an impact on the performance of the models, the errors introduced by automatic segmentation can make those data even less usable. The word n-grams not found in the training corpus may be missing because of segmentation errors, but also because a character sequence can have several correct segmentations while only one of them was kept in the training corpus. With the aim of better exploiting the text data by using different views of the same data, we propose a method that performs multiple segmentations of the training corpus instead of a unique segmentation. This new segmentation method, based on finite-state machines, generates all the possible segmentations of a character sequence, from which the n-grams can then be extracted. It makes it possible to recover the n-grams missed by the unique segmentation and to add new n-grams to the language model. Applying this approach to train the language models of automatic speech recognition systems for Khmer and Vietnamese proved more effective than the classical unique-segmentation method. In the following sections, we first review the state of the art in automatic word segmentation, then present our multiple-segmentation method and the experimental results on Khmer and Vietnamese.

¹ Khmer is the official language of Cambodia.
² In Khmer, a character cluster (CC) is a sequence of inseparable characters with a well-defined structure. Segmenting a Khmer text into CCs is trivial and can be done with rules.
Automatic word segmentation

State of the art

Text segmentation is one of the fundamental tasks in natural language processing (NLP). Many NLP applications require texts segmented into words as input before any further processing, since the word is considered the reference linguistic and semantic unit. For languages such as French and English, it is fairly natural to define a word as a sequence of characters separated by spaces. However, for unsegmented languages, word segmentation is not a simple problem. Because of ambiguities in natural language, a character sequence can be segmented in several ways. This ambiguity is not really a problem for humans, perhaps because an incorrect segmentation generally yields an incomprehensible sentence. Moreover, different people may disagree on the segmentation of a given sentence. Such disagreement exists because there are often different segmentation conventions and because the definition of a word in a language is often ambiguous.
The general word segmentation technique uses an algorithm that looks up in a dictionary the words matching those of the text and, in case of ambiguity, selects the one that optimises a criterion depending on the chosen strategy. In the most common strategies, the optimisation consists in:

• maximising the length of the words, taken one by one from left to right, with backtracking in case of failure ("longest string first" or "longest matching"); a minimal sketch of this strategy is given after this list,

• minimising the number of words in the whole sentence ("smallest number of words" or "maximal matching").
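The sketch below illustrates the first strategy with a minimal greedy longest-matching segmenter with backtracking; it is a generic dictionary lookup over a toy Latin-script string, not the actual tool developed for Khmer.

```python
def longest_matching(text, dictionary, max_word_len=20):
    """Greedy left-to-right segmentation: always try the longest dictionary
    word first and backtrack when no word fits the remaining text."""
    def helper(pos):
        if pos == len(text):
            return []
        for end in range(min(len(text), pos + max_word_len), pos, -1):
            if text[pos:end] in dictionary:
                rest = helper(end)
                if rest is not None:
                    return [text[pos:end]] + rest
        return None  # no segmentation from this position

    return helper(0)

# Toy example: "ilma" is tried first but forces backtracking to "il".
print(longest_matching("ilmangebien", {"il", "ilma", "mange", "bien"}))
# -> ['il', 'mange', 'bien']
```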
These techniques rely heavily on dictionaries, which therefore have to be created. Although this can be done automatically by learning from a corpus, such dictionaries have often been built manually. Research on automatic word segmentation for Chinese and Thai is very active. Among the works that use these techniques, we can cite (Li, 1998) for Chinese and (Haruechaiyasak, 2008) for Thai. The performance of these methods is generally acceptable, but it depends strongly on the size and quality of the dictionaries used for segmentation. Performance decreases in the presence of ambiguity and unknown words (see Table 1 for the results of segmenting Khmer texts).

More elaborate methods exist that use statistical techniques and/or go through a training phase. In (Wu, 2003), for a Chinese sentence to be segmented, a lattice of all possible words is built according to a vocabulary. Statistical methods are then applied to decode the most probable path in the lattice. A statistical and linguistic word segmentation method has also been proposed and implemented for Thai (Meknavin, 1997). In this method, the context of the words is analysed linguistically to determine the most probable segmentation.

State-of-the-art methods combine dictionaries with statistics to obtain better results. However, statistical methods require a large corpus of text segmented manually beforehand. Statistical methods and complex learning methods are not appropriate in our under-resourced context because the resources needed to implement them do not exist. For a given language, we look for segmentation methods that are effective, fast, easy to implement and that make the best possible use of the limited resources available for the language.
Automatic segmentation of Khmer

To illustrate the impact of out-of-vocabulary words on the performance of dictionary-based automatic segmentation methods, we developed automatic segmentation tools for Khmer texts using the two optimisation criteria: "longest string first" (longest matching) and "smallest number of words" (maximal matching). Our test corpus contains 1000 sentences. After manual segmentation, we obtain 31042 words and a dictionary of 4875 words. We then remove the least frequent words from this initial dictionary to create dictionaries with increasing out-of-vocabulary rates (from 5% to 50%) with respect to the test corpus. The segmentation performance is reported in Table 1.

3 Multiple segmentation for statistical language modelling
Why multiple segmentation?

Unlike the unique segmentation described in the previous section, which searches a character sequence for the best segmentation according to an optimisation criterion, our multiple-segmentation approach aims at generating, from a character sequence, all the valid word sequences (based on a dictionary). It is from all these word sequences that n-grams are counted for language model training. Figure 1 shows an example of the multiple segmentation of a Khmer sentence, with three possible segmentations of the same character sequence. Segmentation 1 corresponds to the unique segmentation of the "longest matching" type. With the unique segmentation (segmentation 1), we obtain 4 trigrams. If we apply multiple segmentation to this sentence, we obtain 9 trigrams in total: 5 new trigrams come from the two other segmentations (segmentations 2 and 3). Note that a trigram appearing several times in the multiple segmentations of a sentence is counted only once.

Compared with the unique segmentation, multiple segmentation yields more n-grams. We can divide these new n-grams into three categories:

1. word n-grams that are actually present in the original, unsegmented training corpus but, because of errors introduced by the unique segmentation, are not found after segmentation;

2. word n-grams that are actually present in the original, unsegmented training corpus but are not found after segmentation because a character sequence can have several correct segmentations and only one of them is chosen by the unique segmentation;

3. word n-grams that are not present in the training corpus even when the segmentation is perfectly correct. In this case, multiple segmentation generates these n-grams because it is possible to segment a whole sentence into a sequence of valid words (even if this yields a meaningless sentence), but also because our multiple-segmentation method can generate word sequences locally within a sentence, marking the remaining parts that do not correspond to valid words as "unknown word".

The n-grams of categories 1 and 2 are potentially useful for language modelling, since they are valid word sequences of the language and are actually present in the training corpus. The n-grams of category 3 may disturb the modelling.

We developed a multiple-segmentation tool that outputs the N_seg best segmentations of an input character sequence. The next section describes how multiple segmentation is implemented.
Multiple segmentation using finite-state machines

Our multiple-segmentation tool is built with finite-state machines using the AT&T FSM toolkit (Mohri, 2002). The algorithm is inspired by work on Arabic word segmentation by (Zitouni, 2006) and (Lee, 2003). The multiple segmentation of a character sequence is performed by composing three automata. The first automaton is a transducer that generates a lattice with all possible segments when a character sequence is given as input. The second automaton can be seen as a dictionary in the form of a transducer that accepts characters and produces the sequences corresponding to the words contained in the dictionary, which must be available at the start of the algorithm. The third automaton is a language model that assigns scores to each sequence in the lattice. We compose these three automata to produce a lattice of word segmentation hypotheses from a character input (or a syllable input for Vietnamese). By traversing this lattice, we can generate the N_seg best segmentations for a given input. The N_seg best segmentations are then used to count the n-grams according to the counting method presented in Figure 1.
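Since the actual implementation relies on composing the three automata with the AT&T toolkit, we only sketch a conceptually equivalent procedure below: enumerate the dictionary segmentations of a sequence, keep the first N_seg of them (the language-model ranking of the lattice is omitted), and count each n-gram at most once per sentence, as in the counting scheme of Figure 1. All names and the toy example are illustrative.

```python
from collections import Counter

def segmentations(text, dictionary, max_word_len=20):
    """Yield every way of covering text with dictionary words."""
    if not text:
        yield []
        return
    for end in range(1, min(len(text), max_word_len) + 1):
        word = text[:end]
        if word in dictionary:
            for rest in segmentations(text[end:], dictionary, max_word_len):
                yield [word] + rest

def ngram_counts(sentences, dictionary, n=3, n_seg=10):
    """Count n-grams over the first n_seg segmentations of each sentence,
    counting a given n-gram only once per sentence."""
    counts = Counter()
    for sentence in sentences:
        seen = set()
        for i, seg in enumerate(segmentations(sentence, dictionary)):
            if i >= n_seg:
                break
            for j in range(len(seg) - n + 1):
                seen.add(tuple(seg[j:j + n]))
        counts.update(seen)
    return counts

# Toy Latin-script stand-in for an unsegmented sentence with two segmentations.
print(ngram_counts(["ilmangebien"], {"il", "mange", "bien", "ilman", "gebien"}, n=2))
```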
Experiments

The experiments are carried out on two under-resourced and unsegmented languages, Khmer and Vietnamese. To compare multiple segmentation with dictionary-based unique segmentation for statistical language modelling, we train trigram language models on training corpora segmented into words with each of the two segmentation approaches. To observe the impact of the number of multiple segmentations on language model performance, we run several tests, segmenting the training corpora while varying the number N_seg of best segmentations per sentence from 2 to 1000. Using a development corpus, we compare the trigram coverage (trigram hits) of these language models and their perplexity. We then evaluate these language models by using them in an automatic speech recognition system.

Experiments on Khmer

Khmer is the official language of Cambodia, spoken by more than 15 million people worldwide. It belongs to the Mon-Khmer language group. It is classified as an under-resourced language because linguistic resources and language processing services are not yet well developed. As for the writing system, Khmer is written without spaces between words.

Our Khmer training corpus contains about half a million news-type sentences. A 20k-word dictionary extracted from the Chuon Nath dictionary of the Buddhist Institute of Cambodia is used in this experiment. The unique segmentation based on this dictionary with the "longest matching" criterion yields a corpus of 15 million words. Five other corpora are obtained by performing multiple segmentations with the number N_seg of best segmentations varying from 2 to 1000. Note that the multiple segmentation uses the same dictionary as the unique segmentation. N-gram counts are computed on these corpora and the n-gram language models are then trained using this same 20k-word dictionary.

A development corpus (dev) of 370 sentences (11k words after unique segmentation) is used to evaluate the trigram coverage (trigram hits) and the perplexity of the Khmer language models. Table 2 reports the number of trigrams in the language models, their trigram coverage, their perplexity and the performance of the Khmer automatic speech recognition system (on a test corpus of 160 news-type sentences whose transcriptions differ from the dev set) that uses these models for decoding. Details on the Khmer speech recognition system (decoder, acoustic model) are given in (Seng et al., 2008).
Experiments on Vietnamese

Vietnamese is the official language of Vietnam. It is spoken by about 70 million people worldwide. Its origin is still debated among linguists; it is however generally accepted that it shares strong common roots with Mon-Khmer, which belongs to the Austroasiatic branch. Its orthography has been Latin-based since the 17th century, with accented characters for the tones. Vietnamese is written with spaces between syllables, but these spaces do not mark word boundaries within a sentence, since a word can be made up of one or several syllables. Figure 2 gives an example of a Vietnamese sentence.

Figure 2: Example of a Vietnamese sentence.

The Vietnamese training corpus contains 3 million sentences, i.e. more than 56 million syllables. A 30k-word dictionary extracted from a Vietnamese-French bilingual dictionary is used in this experiment. After the automatic unique segmentation based on this dictionary with the "longest matching" criterion, we obtain a corpus of 46 million words. Multiple segmentations are performed with N_seg varying from 2 to 1000. Trigram language models are then trained on these corpora using the 30k-word dictionary (cf. the Khmer experiment).

A development corpus (dev) of 1000 sentences (44k words after unique segmentation) is used to evaluate the trigram coverage and the perplexity of the language models. Speech recognition performance is estimated on a test corpus of 400 news-type sentences (whose transcriptions differ from the dev set). Details on the Vietnamese automatic speech recognition system are given in (Le et al., 2008). The results of the Vietnamese experiments are shown in Table 3.
Discussion
The experimental results on Khmer and Vietnamese show that the multiple-segmentation approach generates new trigrams compared with the unique segmentation as the number N_seg of best segmentations increases. This increase in the number of trigrams in the language model improves the trigram coverage and the perplexity, which shows that the new trigrams generated by multiple segmentation are relevant for statistical language modelling. For Khmer, the best error rate of the automatic speech recognition system is obtained with the language model M_10, and performance drops if the number N_seg of best segmentations is increased further. This shows that beyond a certain segmentation level, increasing N_seg adds many bad trigrams and disturbs the distribution of probabilities in the language model. This phenomenon can be observed clearly for Vietnamese: the trigram coverage increases by only 0.2% when the number N_seg of best segmentations goes from 50 to 1000, while more than 2.5 million new trigrams are added to the model. For Vietnamese, the best error rate of the automatic speech recognition system is obtained with N_seg = 2. A more detailed analysis of the Vietnamese training corpus shows that nearly 80% of the words in the corpus are monosyllabic and only 20% are multi-syllabic, which means that, compared with Khmer, there are few good alternative segmentations to generate.
Conclusion
In this article we proposed an approach that performs multiple segmentations of the training corpus for statistical language modelling, in the context of under-resourced and unsegmented languages. This approach makes it possible to recover the n-grams missed by the unique segmentation and to generate new n-grams in the models. Applying this method to train the language models of automatic speech recognition systems for Khmer and Vietnamese proved more effective (in perplexity and in recognition error rate) than the unique-segmentation method.
Figure 1: Example of the multiple segmentation of a Khmer sentence (three possible segmentations of the same character sequence).

We observe that, when there are no out-of-vocabulary words, performance is around 92% for both methods, but it drops to 69% and 72% when 50% of the words in the corpus to be segmented are out of vocabulary. For under-resourced languages, it is difficult to obtain a dictionary with a low out-of-vocabulary rate. In that case we therefore risk poor automatic segmentation of the training corpus, and the language model trained on this badly segmented corpus will in turn perform poorly.

Table 1: Percentage of correctly segmented words for the two dictionary-based segmentation methods, as a function of the out-of-vocabulary rate.

Out-of-vocabulary rate   Maximal Matching   Longest Matching
0%                       91.6               91.7
5%                       90.1               90.2
10%                      90.2               90.3
20%                      86.3               86.9
30%                      82.6               83.5
40%                      75.7               77.2
50%                      68.8               72.4
Table 3: Experimental results on Vietnamese.

                                                     M_Unique  M_2    M_5    M_10   M_50   M_100  M_500  M_1000
Number of trigrams in the language model (million)   20.32     24.06  28.92  32.82  34.2   34.9   35.83  36.8
Number of trigram hits on dev                        15901     16190  16384  16458  16547  16569  16593  16614
% of trigram hits on dev                             47.7%     48.6%  49.2%  49.4%  49.7%  49.7%  49.8%  49.9%
Perplexity on dev                                    118.9     118.1  125.9  129    133.4  134.8  136.9  137.6
Recognition error rate on test                       36.5%     35.5%  36%    36.1%  36.1%  36.2%  36.5%  36.5%
|
184,864,073 | [] | Comment mesurer la couverture d'une ressource terminologique pour un corpus ?
Dourdan, June 6-10, 2005
Goritsa Ninova, Adeline Nazarenko, Thierry Hamon, Sylvie Szulman
LIPN UMR 7030, Université Paris 13 & CNRS, 99 av. J.-B. Clément, 93430 Villetaneuse
Comment mesurer la couverture d'une ressource terminologique pour un corpus ?
TALN 2005
Dourdan, June 6-10, 2005
Keywords: lexical coverage, terminology, lexical statistics
Abstract: This paper proposes a formal definition of the notion of lexical coverage. This definition is based on four metrics that give a global view of the relationship between a lexical resource and a corpus, thus helping the choice of a relevant resource with respect to a given corpus. These metrics have been experimented with in the context of specialised corpus analysis in genomics: 5 terminologies have been confronted with 4 different corpora. The combination of the resulting figures reflects various types of corpus vs. resource relationships.
Introduction
The term "lexical coverage" is commonly used without a clear definition of what it means, and different authors attach different notions and measures to it. The problem is all the more complex as the resources used often contain polylexical expressions whose projection onto a corpus can be done in several ways. This article proposes a set of metrics to pin down this notion of coverage in the general case of a resource consisting of a list of mono- and polylexical terms. These measures are tested for several resource/corpus pairs. The first results are encouraging: they show that the behaviour of a resource for a given corpus can indeed be documented prior to any processing, and can thus guide the choice of the resource.
After highlighting what is at stake in this problem and the questions it raises (Section 2), we present a set of metrics in Section 3. These are applied from the perspective of the automatic processing of genomics corpora. The results of these experiments are presented and discussed in Section 4 of this article.
Problem statement
What is at stake
Processing specialised corpora calls on semantic resources that are usually called specialised because they describe a particular domain of activity. These resources can be of different types depending on the intended processing, but they must have a lexical dimension as soon as they are meant for the analysis and interpretation of textual data.
Semantic web ontologies must therefore be lexically anchored (with lexical items attached to the ontology nodes) if they are to be used for indexing texts. Techniques for accessing the content of textual documents are diverse (information extraction, question answering, navigation or summarisation tools), but they all rely on a partial semantic analysis of the documents, which involves recognising certain discourse elements (named entities and domain terms in particular), typing them semantically and relating them to each other (Nazarenko, 2005). As a consequence, these techniques also rely on specialised lexicons, terminologies or thesauri to identify the specialised vocabulary. Semantic categories and lexical relations are used (when they exist) to disambiguate the texts and guide their interpretation.
As natural language processing (NLP) applications, including those at the semantic level, become increasingly lexicon-driven, the question of which resources to exploit gains importance. The situation often combines plethora and penury. On the one hand, many terminological resources exist, especially in domains such as biology or medicine where the effort to organise knowledge is long-standing 1 . On the other hand, "good resources" are rare: the degree of specialisation or the point of view represented by the resource generally differs from that of the text to be analysed. This observation was made by (Charlet et al., 1996), again in the medical domain, which is nevertheless known for the richness of its knowledge bases. In practice, since one can neither do without a resource nor rebuild a new one for each new application, one often makes do with what is available. In some cases the resource can be specialised and adapted to the target domain and task (the problem of lexical adaptation or "lexical tuning" (Basili et al., 1998)), but this still presupposes an initial resource.
A question then arises: among the set of resources that seem, a priori, to partly cover the domain of the corpus to be processed, which one(s) should be chosen, and on what criteria? This question is all the more important as the number of resources often has to be limited in order to reduce the unavoidable data preparation work and to avoid inconsistency problems. It is generally too costly to exploit several resources in parallel in order to test and compare them with respect to the target application. Domain experts are not always of great help either: even if they can describe the sub-domain covered by a resource and the point of view it represents, they are usually ill-equipped to assess its strictly lexical adequacy.
This resource selection problem is often solved in a very empirical way, which prevents capitalising from one experiment to the next. It is therefore important to have formal criteria for describing the behaviour of a resource with respect to a given corpus and for guiding its choice. This is the aim of this work: we propose a first set of metrics to assess the coverage of a corpus by a terminological resource.
Difficulties
For traditional dictionaries, adequacy to a corpus is expressed in terms of coverage and assessed from the number of word occurrences in the corpus that can be attached to dictionary entries. Coverage is harder to define for terminological resources.
The first difficulty lies in the diversity of terminological resources, which makes their comparison problematic. The nature of the information differs from one resource to another. Beyond lists of terms, the terms themselves may be typed and the types organised into a hierarchy (thesaurus). In the richest resources, the terms are moreover linked to one another by semantic relations. Resources also have various degrees of specialisation: it is hard to compare a lexicon of 10,000 units containing many units that also appear in general-purpose dictionaries with a lexicon of 500 units of which very few appear in standard dictionaries. Finally, resources differ in their degree of lexicalisation: some merely list concept labels, while others consider these labels in their lexical and linguistic dimension. The latter account for the different forms under which these semantic units can be realised in corpora, up to associating contextual disambiguation rules with polysemous units (Nédellec, Nazarenko, 2005).
The second difficulty is that we are trying to confront two objects that are not of the same nature. The resource and the corpus stand in opposition the way language stands in opposition to discourse: a set of lexical elements (the resource) has to be compared with a set of occurrences (the corpus). As a consequence, potentially polylexical units must also be compared with occurrences observed in the corpus, which are necessarily monolexical. Since the aim is to assess a priori the adequacy of resources for corpora, we presuppose no prior term recognition step.
In this first piece of work we focus the study on terminological resources considered as lists of terms, without exploiting the information they may contain about variation rules, semantic typing or the semantic relations between terms. Once these first elements are in place, the work obviously needs to be pursued, for instance by taking into account the disambiguation of polysemous terms, variation links between terms and the semantic structure. These points are not addressed here.
State of the art
The question of selecting ontologies for an application becomes more important as the number of available ontologies grows and their formats become standardised. This concern is at the heart of the semantic web. (Buitelaar et al., 2004) shows that creating an ontology library (OntoSelect) requires defining criteria for selecting a particular ontology. Three criteria are proposed: the degrees of structure and connectivity are strictly ontological measures, but the coverage criterion is established relative to a collection of documents. This last criterion is, however, defined rather crudely 2 : it only imperfectly takes into account the strictly linguistic dimension of the "concept labels". Brewster et al. (2004) go further and propose to evaluate ontologies relative to a given corpus. The notion of coverage they propose is richer than the previous one. It is based on the number of terms in the corpus that correspond to concepts of the ontology, once a variation computation has been performed to recognise non-canonical term forms and a semantic expansion has been applied to allow matches at different levels of generality. On this basis, an ontology is evaluated according to the number of concepts that find a counterpart in the corpus. This second piece of work takes the linguistic nature of the lexical realisations of concepts in corpora more into account (notion of variation, quasi-synonymy between a hypernym and its hyponym), but it focuses on the evaluation and internal coherence of an ontology, whereas our goal is rather to guide the choice of a resource for a given corpus, which gives another role to the notion of coverage and requires defining it more precisely.
On the lexical level, the question of coverage has hardly been studied 3 . Intuitively, one tends to prefer large resources (in number of entries), with the idea that they are either more complete on a restricted domain or more generic and less tied to a particular domain. (Nirenburg et al., 1996) criticises this assumption, stressing that the size of a resource gives a very partial view of its coverage. In that work, however, the authors seek to assess the intrinsic quality of the resource, whereas we defend the idea that a resource has no value in itself and only gains value through the uses that can be made of it. Overall, the question of choosing a resource given a corpus has attracted less attention than the subsequent question: once the resource has been chosen, how should it be adapted to the corpus (Basili et al., 1998)?
Lexical statistics has stressed from its beginnings (Muller, 1977; Manning, Schütze, 1999) that there is a functional relation between a resource (a vocabulary) and a corpus, but it has not addressed the problem of the polylexical units that terminologies contain.
Proposed metrics
In order to assess the adequacy of a resource for a corpus, we propose several measures. The aim is to characterise the coverage of the resource as well as its degree of specialisation.
Terminological remarks
We pose the following definitions (a small sketch constructing these sets appears after the list):

• The text T of the corpus is an ordered set of words 4 . Words are defined by their graphical form and identified by their position in the text.

• The vocabulary V of the corpus is the set of word types, i.e. the set of distinct words of the corpus. Word types are monolexical units.

• The lexicon L of the resource is the set of lexical entries of the resource 5 , whether they consist of one or several words, specialised or not.

• The useful part of the resource PU is the set of entries of the resource that occur in the corpus. It is a subset of L.

• The decomposed useful part PUD is the set of all word types occurring in the entries of PU. It is obtained by decomposing the entries of PU into elementary word types. Assuming this decomposition follows the same rules used to segment the corpus, this set of word types PUD also corresponds to the part of the corpus vocabulary that is recognised (PR) by the resource. We therefore have PUD = PR, where PR is a subset of V.

• The useless part of the resource PNU is the set of entries that have no occurrence in the corpus. It is the complement of PU with respect to L.

• The unknown part of the corpus vocabulary PNR is the set of word types of V not recognised by the resource. It is the complement of PR with respect to V.
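The following sketch, which is ours and assumes a simple whitespace/punctuation tokeniser and toy inputs, shows one way of constructing the sets T, V, L, PU, PUD (= PR), PNU and PNR from a raw corpus and a term list.

```python
import re

def tokenize(text):
    """Whitespace/punctuation segmentation, as assumed in footnote 4."""
    return [w for w in re.split(r"[^\w']+", text.lower()) if w]

def occurs(entry_tokens, T):
    """True if the token sequence of an entry appears contiguously in T."""
    n = len(entry_tokens)
    return any(T[i:i + n] == entry_tokens for i in range(len(T) - n + 1))

def build_sets(corpus_text, lexicon):
    T = tokenize(corpus_text)                      # ordered word occurrences
    V = set(T)                                     # vocabulary (word types)
    L = set(lexicon)                               # lexical entries of the resource
    PU = {e for e in L if occurs(tokenize(e), T)}  # useful part
    PUD = {w for e in PU for w in tokenize(e)}     # decomposed useful part = PR
    PNU = L - PU                                   # useless part
    PNR = V - PUD                                  # unknown part of the vocabulary
    return T, V, L, PU, PUD, PNU, PNR
```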
Measures
The metrics we propose for assessing the adequacy of a terminological resource for a corpus are defined as ratios between the different sets defined above. One can distinguish measures that bear on forms from measures that bear on occurrences.

The first measure assesses the degree of speciality of the resource with respect to a corpus. The contribution (Contr) is the proportion of entries of the lexicon that appear in the corpus; it is defined by the formula below. We call surplus (Surpl) the proportion of "useless" entries, which echoes the notion of resource excess introduced by (Brewster et al., 2004). The contribution is high if many entries of the resource are found in the corpus, and thus if the speciality domain of the resource matches that of the corpus. Conversely, a high surplus indicates that the resource is relatively generic and thus potentially useful for varied corpora. Since these measures are independent of the size of the resource, the contributions of very different resources can be compared.

Contr = |PU| / |L|        Surpl = 1 - Contr = |PNU| / |L|        where |X| denotes the cardinality of X

Another measure assesses to what extent the resource "covers" the vocabulary of the corpus. To have comparable sets, the recognised part of the vocabulary must be compared with the vocabulary as a whole. The two dual measures of recognition (Rec) and ignorance (Ign) are defined below. Recognition is the proportion of decomposed entries recognised in the corpus with respect to the total number of word types in the corpus. Recognition increases (1) if the specialised terms used in the corpus are found in the lexicon, but also (2) when the resource contains many general-language words, such as grammatical words. Only the confrontation of the different measures gives a more precise idea of the behaviour of a resource: in case (2), high recognition tends to be associated with a high surplus, whereas high recognition combined with a high contribution indicates a specific resource well suited to the corpus under consideration. The last measure complements the coverage measure: it is the density (Dens), defined below, where f_PUD is the mean frequency in the corpus of the entries of PU and f_V is the mean frequency of the word types in the corpus. It is a normalised measure of the frequency of the useful entries in the corpus; to obtain a measure independent of corpus size, the mean frequency of the entries of PU is weighted by the mean frequency of the word types in the corpus. For the example of Section 3.3 below, these measures give Contr = 1, Rec = 3/7 and Couv = 3/7; note that the occurrence of the entry "système" that falls inside the larger occurrence of the entry "système de fichiers" is not counted as such in the coverage.

Rec = |PR| / |V| = |PUD| / |V|        Ign = 1 - Rec = |PNR| / |V|        Dens = f_PUD / f_V
Results
Experimental protocol
We tested these metrics in the context of research and information extraction projects in the genomics domain. This type of specialised application indeed requires exploiting resources, and the choice of the resource(s) to use often proves delicate. We considered different genomics corpora and different terminological resources that were a priori rather well suited to the application domain (Hamon, 2005).
Analysis of the results
The values of the different metrics for the 5 resources and the 4 corpora above are summarised in the graphs of Figures 2 and 3. The third remark concerns the two measures of recognition and coverage, which appear fairly well correlated. The direction and size of the gap between them are very stable: a corpus is all the better covered as its vocabulary is recognised. It is therefore the absence of correlation that is significant. Our experiments show, for instance, that the GlossBioch glossary has a clearly higher coverage than GO on Transcript, for a similar recognition. This is a sign that GlossBioch better reflects the specialised language of the Transcript corpus, despite its modest size (Fig. 3), and proof that the size of a resource is not a sufficient criterion. In this particular case, the measures reveal a behaviour of the resources that runs counter to the initial intuitions of the biologists, who wrongly recommended using GO.
The last point concerns density, which assesses the frequency of the entries in the corpus. Surprisingly, the highest density is observed for a small, highly specialised resource (the GoBMT glossary, Fig. 2) and for the most thematically different corpus (Carnivore). Only a detailed analysis of the entries of the useful part of the glossary explains this counter-intuitive result. Less than 10% of the entries appear in the corpus, but these entries have high frequencies; among them are can (676 occ.), fish (121 occ.) and tel (8 occ.), all three described in the resource as gene names. These are ambiguous words wrongly recognised as gene names in the Carnivore corpus. A high density can thus reflect either a good adequacy of the resource in terms of specialisation or ambiguity phenomena. A simple weighted frequency measure therefore does not appear sufficiently informative. One should probably consider the lexical profile of the entries of the useful part of the resource with respect to the set of word types of the corpus in order to predict the semantic nature of the coverage. This profile should make it possible to assess the dispersion of the frequencies and thus to better spot artificial matches between certain specialised terms and occurrences of common words.
Conclusion and perspectives
To characterise with some reliability and reproducibility the behaviour of a lexical resource for a given corpus, we have defined and tested a set of metrics that gives an idea of the "coverage", a vague but very commonly used notion whose importance grows with the number of available resources. These metrics cannot claim to replace a precise analysis of what a resource brings: they aim at informing the choice of the resources and of the processing to implement. The experiments we carried out show the interest of this type of metrics, but we have also stressed the limits of the proposed measures. A richer density measure than ours should be defined and, to complete the global picture of coverage we are trying to build, the distribution of the occurrences of the resource entries should be taken into account. The notion of lexical coverage as defined here should moreover be extended to take into account entry variants, their semantic types and even their semantic relations.
Figure 1: Construction of the reference sets
Figure 2: Adequacy measures of different resources for different corpora: coverage and density.
The measures smooth out size effects both on the corpora and on the resources. The GlossBioch glossary has a coverage similar to that of MeSH, which nevertheless contains 50 times more terms (Fig. 2). The behaviour of the resources is comparable for the Transcript corpus and its sub-corpus Transcript-932 (Fig. 3). One can therefore consider selecting a resource from a sub-corpus without projecting the resource onto the whole corpus, which makes experiments easier. Contribution is an exception, however: it is sensitive both to the size of the resource and to that of the corpus, being lower for a small corpus (GloBioch for Transcript-932) and for voluminous resources (MeSH for Transcript). Despite this sensitivity to size effects, it is an interesting measure: a high contribution for a small resource is a good indicator of relevance (cf. GoBMT and keywlist for Transcript).
Figure 3: Contribution, coverage and recognition measures of 5 resources on 4 corpora: Transcript, Transcript-932 (932), Drosophile (1199-droso) and Carnivore
Speaking of "coverage" evokes the idea of a corpus wholly or partially "covered" by the resource. Coverage is therefore computed relative to the corpus rather than to its vocabulary. We define the coverage (Couv) as the proportion of word occurrences corresponding to word types that enter the entries of the useful part of the resource. In the formula below, freq_i denotes the number of occurrences of an entry i of PU that are not included in an occurrence of another, larger entry, and longueur_i is the length of the entry in number of words. In the case of nested terms (e.g. système and système de fichiers), only the occurrence of the larger term enters the frequency count. This coverage measure is independent of corpus size, which makes the coverage measures of a resource comparable even on corpora of different sizes.

Couv = ( Σ_{i=1}^{|PU|} freq_i × longueur_i ) / |T|

3.3 Example

As an example, consider the following resource and text:
• L = {système, système de fichiers}
• T = « Il a réparé le système de fichiers »
We have |L| = 2 and |T| = 7. In this particular case, |V| = |T| = 7. Since all the units of the lexicon are found in the corpus, we also have PU = L and PUD = PR = {système, de, fichiers}. (A sketch computing the four measures on this example follows.)
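A possible implementation of the four measures on this example is sketched below. It is our own illustration, not the authors' code: it reuses the tokenize() and build_sets() helpers from the sketch in the terminological remarks above, and the way multi-word entry frequencies enter the density is a simplifying assumption on our part.

```python
from collections import Counter

def metrics(corpus_text, lexicon):
    T, V, L, PU, PUD, PNU, PNR = build_sets(corpus_text, lexicon)
    contr = len(PU) / len(L)                      # Contr = |PU| / |L|
    rec = len(PUD) / len(V)                       # Rec = |PUD| / |V|
    # Coverage: count occurrences through the longest-matching entries only,
    # so a nested entry inside a larger one is not counted twice.
    covered, i = 0, 0
    by_length = sorted(PU, key=lambda e: -len(tokenize(e)))
    while i < len(T):
        step = 1
        for entry in by_length:
            toks = tokenize(entry)
            if T[i:i + len(toks)] == toks:
                covered += len(toks)
                step = len(toks)
                break
        i += step
    couv = covered / len(T)                       # Couv = sum(freq_i * longueur_i) / |T|
    freq = Counter(T)
    f_v = sum(freq.values()) / len(V)             # mean word-type frequency
    f_pud = covered / max(len(PU), 1)             # assumed mean frequency of PU entries
    dens = f_pud / f_v
    return contr, rec, couv, dens

lexicon = ["système", "système de fichiers"]
text = "Il a réparé le système de fichiers"
print(metrics(text, lexicon))   # contr = 1.0, rec = 3/7, couv = 3/7, ...
```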
1 See for instance UMLS (Unified Medical Language System, http://www.nlm.nih.gov/research/umls/).
2 "Coverage is measured by the number of labels for classes and properties that can be matched in the document."
3 The notion of lexical coverage is not defined in linguistic statistics textbooks (Oakes, 1998). When the question is addressed (Manning, Schütze, 1999, p. 130), it is only to assess the number of unknown words in a text.
4 The notion of "word" is difficult to define. We consider here as words the units resulting from a segmentation of the text, given a clearly defined segmentation algorithm. In the examples presented here, all whitespace and punctuation characters are treated as word separators.
5 As stressed above, we do not consider at this stage the other semantic information provided by the resource.
6 www.ncbi.nlm.nih.gov
7 Flybase is a structured and bibliographic database on drosophila: http://flybase.bio.indiana.edu/
8 These resources are available at the following addresses:
keywlist: ftp://ftp.expasy.org/databases/swiss-prot/release/keywlist.txt
GO: http://www.geneontology.org/, version downloaded in September 2002
MeSH: http://www.nlm.nih.gov/mesh/meshhome.html (Medical Subject Headings, Library of Medicine)
GlossBioch: http://www.portlandpress.com/pcs/books/prod_det.dfm?product=1855780887
GoMBT: http://www.asheducationbook.org/cgi/content/full/2002/1/490
BUITELAAR P., EIGNER T., DECLERCK T. (2004). OntoSelect: A Dynamic Ontology Library with Support for Ontology Selection. In Proc. of the Demo Session at the Int. Semantic Web Conf., Hiroshima, Japan, Nov. 2004.
BREWSTER C., ALANI H., DASMAHAPATRA S., WILKS Y. (2004). Data Driven Ontology Evaluation. In Proc. of the Int. Conf. on Language Resources and Evaluation (LREC 2004), Lisbon, Portugal.
BASILI R., PAZIENZA M.T., STEVENSON M., VELARDI P., VINDIGNI M., WILKS Y. (1998). An Empirical Approach to Lexical Tuning. In Proc. of the Workshop on Adapting Lexical and Corpus Resources to Sublanguages and Applications (First Int. Conf. on Language Resources and Evaluation, LREC 1998), P. Velardi (ed.), May, Granada.
CHARLET J., BACHIMONT B., BOUAUD J., ZWEIGENBAUM P. (1996). Ontologie et réutilisabilité : expérience et discussion. In Acquisition et Ingénierie des Connaissances, N. Aussenac, P. Laublet and C. Reynaud (eds.), pp. 69-87, Cépaduès-Editions, Toulouse.
HAMON H. (2005). Indexing specialized documents: are terminological resources sufficient? In Actes des 6èmes journées Terminologie et Intelligence Artificielle (TIA 2005), pp. 71-82, Rouen.
HOVY E. (2001). Comparing sets of semantic relations in ontologies. In Semantics of Relationships, R. Green, C.A. Bean and S.H. Myaeng (eds.), chapter 6, Kluwer, Dordrecht, NL.
PILLET V. (2000). Méthodologie d'extraction automatique d'information à partir de la littérature en science en vue d'alimenter un nouveau système d'information. Application à la génétique moléculaire pour l'extraction de données sur les interactions. Thèse de doctorat, Aix-Marseille III.
MANNING C., SCHÜTZE H. (1999). Foundations of Statistical Natural Language Processing. The MIT Press.
MULLER C. (1977). Principes et méthodes de statistique lexicale. Hachette Université, Paris.
NAZARENKO A. (2005). Sur quelle sémantique reposent les méthodes automatiques d'accès au contenu textuel ? In Sémantique et corpus, A. Condamines (coord.), ch. 6, pp. 211-244, Hermès/Lavoisier.
NEDELLEC C., NAZARENKO A. (2005). Ontology and Information Extraction: a necessary symbiosis. In Ontology Learning and Population, P. Buitelaar, P. Cimiano, B. Magnini (eds.), IOS (to appear).
NIRENBURG S., MAHESH K., BEALE S. (1996). Measuring semantic coverage. In Proc. of the 16th Conf. on Computational Linguistics (COLING'96), Copenhagen, Denmark, ACL, pp. 83-88. |
||
245,855,708 | ICL's Submission to the WMT21 Critical Error Detection Shared Task | This paper presents Imperial College London's submissions to the WMT21 Quality Estimation (QE) Shared Task 3: Critical Error Detection. Our approach builds on cross-lingual pre-trained representations in a sequence classification model. We improve the base classifier by (i) adding a weighted sampler to deal with imbalanced data and (ii) introducing feature engineering, where features related to toxicity, named-entities and sentiment, which are potentially indicative of critical errors, are extracted using existing tools and integrated to the model in different ways. We train models with one type of feature at a time and ensemble those models that improve over the base classifier on the development set. Our official submissions achieve very competitive results, ranking second for three out of four language pairs. | [
207880568,
226237546,
208117506,
220045438
] | ICL's Submission to the WMT21 Critical Error Detection Shared Task
November 10-11, 2021
Genze Jiang genze.jiang20@imperial.ac.uk
Language and Multimodal AI (LAMA) Lab
Imperial College London
UK
Zhenhao Li zhenhao.li18@imperial.ac.uk
Language and Multimodal AI (LAMA) Lab
Imperial College London
UK
Lucia Specia l.specia@imperial.ac.uk
Language and Multimodal AI (LAMA) Lab
Imperial College London
UK
ICL's Submission to the WMT21 Critical Error Detection Shared Task
Proceedings of the Sixth Conference on Machine Translation (WMT)
November 10-11, 2021
This paper presents Imperial College London's submissions to the WMT21 Quality Estimation (QE) Shared Task 3: Critical Error Detection. Our approach builds on cross-lingual pre-trained representations in a sequence classification model. We improve the base classifier by (i) adding a weighted sampler to deal with imbalanced data and (ii) introducing feature engineering, where features related to toxicity, named-entities and sentiment, which are potentially indicative of critical errors, are extracted using existing tools and integrated to the model in different ways. We train models with one type of feature at a time and ensemble those models that improve over the base classifier on the development set. Our official submissions achieve very competitive results, ranking second for three out of four language pairs.
Introduction
Critical Error Detection (CED) is a new task which has been introduced in the WMT21 Quality Estimation (QE) Shared Task. 1 The purpose of CED is to address a challenging problem in Machine Translation (MT): translations produced by state-of-the-art MT systems can be grammatical and fluent but do not always retain the meaning of the source text. More importantly, incorrect translations can be misleading and even have catastrophic consequences such as health, safety, legal, or financial implications. However, these can be hard errors to capture by general QE architectures, which have been shown to be prone towards relying mainly on the translated sentence (Sun et al., 2020).
According to the Shared Task definition, a critical translation error is a type of error that occurs when the meaning of the translation deviates from the source sentence in a critical way. The task data (Section 2.1) includes five categories of such errors: deviation in toxicity (TOX), in named entities (NAM), in sentiment polarity or negation (SEN), or in numbers (NUM), or introduction of health or safety risks (SAF).

1 http://statmt.org/wmt21/quality-estimation-task.html
Our results comparing different features show that some of the features are indeed useful, but there is no general pattern that applies to all language pairs (Section 3.1). The official submission, which uses an ensemble of the models that lead to improvements over the baseline on the dev set for each language shows that ensembling only models with promising features are better than ensembling models with all kinds of features (Section 3.2). Upon manual inspection, we observed that additional features indeed help the model to make predictions but this is subject to the accuracy of features (Section 3.3).
Experiment Settings
Dataset
According to the description of the WMT21 CED Shared Task, the dataset for this task was collected from Wikipedia comments (Wulczyn et al., 2017) in English, with translations generated by the ML50 multilingual translation model (Tang et al., 2020), consisting of four language pairs: English-Czech (En-Cs), English-German (En-De), English-Japanese (En-Ja) and English-Chinese (En-Zh). The number of data samples in the training set differs for the four language pairs but is around 6500-8000. Each language pair has 1000 data samples in the dev set and 1000 data samples in the test set. For each sentence pair in the dataset, there are three labels given by three human annotators. The three labels are aggregated into the final label of the dataset using a majority strategy. The final label is either ERR or NOT, where ERR means the translation has at least one critical error and NOT means the translation does not have a critical error.

Table 1: Statistics of datasets for the four language pairs. The distribution of labels for the test set is unknown as this is a blind evaluation task.
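As a toy illustration of the majority aggregation just described (the annotator labels below are made up), the following snippet returns the final ERR/NOT label:

```python
from collections import Counter

def aggregate(labels):
    """Return the majority label among the three annotator labels."""
    return Counter(labels).most_common(1)[0][0]

print(aggregate(["ERR", "NOT", "ERR"]))  # ERR
```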
The dataset information for each language pair can be found in Table 1. As can be seen, the data is very imbalanced, with the En-Ja dataset suffering the most: the ERR label only accounts for 9.4% in the training set. The En-De dataset is the least imbalanced compared to other three language pairs, where the proportion of ERR label in En-De training set reaches 27.9%.
Features
We extract features reflecting sentences' toxicity score, sentiment and named entities. The expectation is that these features could be helpful in detecting critical errors since these errors stem from issues with the translation/introduction of these and other linguistic phenomena. Ideally we would have wanted to extract this information for both source and translated sentences to be able to perform some sort of comparison between the two, for example, presence of toxicity in the translation but not in the source sentence. However, we are limited by the availability of tools in the four language pairs, as we explain below.
For all features, our goal is to have a discrete representation which will allow us to easily incorporate them into the architecture, as will be explained in Section 2.3.2. Therefore, we need to threshold some of these features.
The toxicity score is produced by Perspective API, 3 which supports only English and German amongst our five languages. Based on some manual inspection of the predictions by Perspective, we consider that if the toxicity score of a sentence is greater than 0.5, the sentence will be regarded as toxic. We leave for future work experiments varying this threshold. Since this API does not support Czech, Japanese and Chinese, we were only able to extract a toxicity feature in the source sentences for En-Cs, En-Ja and En-Zh.
The sentiment score is produced by the Google Cloud Natural Language API, 4 which supports English, German, Japanese and Chinese. Therefore, we can get the sentiment feature of both the source sentence and the translation for En-De, En-Ja and En-Zh. The score returned by this API is a float ranging from -1 to 1. Empirically, we consider a sentence to be negative if the score is smaller than -0.2, and positive if the score is greater than 0.2; otherwise the sentence's sentiment is neutral. In our experiments, the sentiment feature is not applied to En-Cs because Czech is not supported by this API.
The information of named entities (NE) is extracted using spaCy, 5 which can recognise over 15 NE types. We count the number of named entities for each NE type and finally choose seven NE types with the highest counts as features. The description of the seven NE types can be found in Table 2. We extract named entities in both source sentence and translation for En-De, En-Ja and En-Zh. However, Czech is not supported by spaCy, therefore we do not use NE features for En-Cs.
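The sketch below illustrates how such discrete features could be derived. It is our own hedged illustration: the spaCy call is real, but toxicity_score and sentiment_score are assumed callables standing in for the Perspective and Google Cloud APIs, and the spaCy label names (PERSON, DATE, ...) are our mapping onto the paper's abbreviations (PER, DAT, ...). Thresholds follow the paper (0.5 for toxicity, +/-0.2 for sentiment).

```python
import spacy

nlp = spacy.load("en_core_web_sm")
# spaCy labels assumed to correspond to ORG, PER, DAT, CRD, ORD, NRP, GPE.
NE_TYPES = {"ORG", "PERSON", "DATE", "CARDINAL", "ORDINAL", "NORP", "GPE"}

def toxicity_feature(sentence, toxicity_score):
    """Return the [TOX] marker when the (assumed) API score exceeds 0.5."""
    return "TOX" if toxicity_score(sentence) > 0.5 else None

def sentiment_feature(sentence, sentiment_score):
    """Discretise an (assumed) sentiment score in [-1, 1] into three classes."""
    s = sentiment_score(sentence)
    if s > 0.2:
        return "SEN_POS"
    if s < -0.2:
        return "SEN_NEG"
    return "SEN_NEU"

def ne_counts(sentence):
    """Count named entities per type with spaCy."""
    counts = {t: 0 for t in NE_TYPES}
    for ent in nlp(sentence).ents:
        if ent.label_ in counts:
            counts[ent.label_] += 1
    return counts
```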
Models
Baseline Model
The baseline model employs the MonoTransQuest framework (Ranasinghe et al., 2020), which is proposed for general quality estimation (QE) tasks and is shown in Figure 1. Essentially this is used to produce the baseline score of CED Shared Task. The model is based on a pre-trained XLM-RoBERTa transformer model (Conneau et al., 2020) and is used to perform sentence-level classification tasks. The model takes a sequence of tokens as input which starts with <s>, denoting [CLS] token, followed by tokens for source sentence and translation and ended with </s> token. The source sentence and its translation, separated by [SEP] token, are fed into one single transformer encoder at the same time. Then the output of the transformer encoder is fed into a classification head where cross-entropy is adopted as the loss function. We use pre-trained XLM-RoBERTa models released by HuggingFace's model repository (Wolf et al., 2020) for the implementation.
To alleviate the influence of imbalanced training data, a weighted sampler can be applied to the data loader during training. The weighted sampler aims to make the label distribution in each training batch as balanced as possible. The sampler weights are computed as the reciprocals of the label proportions.
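A minimal PyTorch sketch of such a sampler, with a toy label list and per-example weights equal to the reciprocals of the label proportions, could look as follows (this is our illustration, not the submitted code):

```python
from collections import Counter
import torch
from torch.utils.data import WeightedRandomSampler, DataLoader, TensorDataset

labels = ["NOT"] * 90 + ["ERR"] * 10          # toy, heavily imbalanced labels
proportions = {l: c / len(labels) for l, c in Counter(labels).items()}
weights = torch.tensor([1.0 / proportions[l] for l in labels], dtype=torch.double)

sampler = WeightedRandomSampler(weights, num_samples=len(labels), replacement=True)
dataset = TensorDataset(torch.arange(len(labels)))   # stand-in for the real features
loader = DataLoader(dataset, batch_size=16, sampler=sampler)
```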
Model with Features
To utilise the features mentioned in Section 2.2, we proposed two different approaches. Figure 2: Architecture of the first approach (adding special tokens). We insert TOX/SEN/NE information to the source sentence and its translation as special tokens, and then feed sentences with special tokens to the baseline architecture.
The first approach (shown in Figure 2) is to add special tokens. Here the features (toxicity, sentiment, named entities) are directly inserted as special tokens to the input source sentence and, where available, its translation before getting tokenised. To correctly tokenise sentences with features, these special tokens are also added to the XLM-RoBERTa tokeniser. The remaining architecture is the same as the baseline model except for the dimension of model's word embeddings as the model's token embeddings should be resized when adding new tokens.
For the toxicity feature, a special token [TOX] is added to the beginning of the input token sequence if and only if the sentence is toxic. If the sentence is not toxic, the [TOX] token will not be added. For En-De, the [TOX] token is applied to both source sentence and translation. But for other three language pairs it is only applied to the source sentence (English), because the Perspective API does not support Czech, Japanese and Chinese.
For the sentiment feature, there are three special tokens, [SEN_POS], [SEN_NEG], [SEN_NEU], representing positive, negative and neutral sentiment respectively. Each time only one token denoting sentence's sentiment is added to the beginning of that sentence. All the sentences should have one sentiment token at the beginning. The sentiment token is applied to both source sentence and translation for En-De, En-Ja and En-Zh. We do not perform experiments on sentiment feature for En-Cs due to lack of support on Czech from sentiment analysis API.
For named-entities feature, there are seven special token pairs corresponding to seven types of named entities generated by SpaCy API, e.g.
[ORG] and [/ORG], [DAT] and [/DAT], etc. We use special token pairs to enclose the named entities of the relevant type in sentences at word level. Similarly to the sentiment feature, the tokens of the named-entities feature are also applied to both source and translation for En-De, En-Ja and En-Zh. Czech is not supported by spaCy, hence we do not apply this feature to En-Cs.
By adding extra features to the texts, we expect to guide the model with the toxicity/named-entities/sentiment information on the source sentence, or the discrepancy of such information between the source sentence and the translation, which might indicate the existence of critical translation errors.
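A hedged sketch of this first approach is given below: the markers are registered as additional special tokens of the XLM-RoBERTa tokenizer so they stay atomic, and the embedding matrix is resized accordingly. The example sentences and the subset of tags shown are our assumptions for illustration.

```python
from transformers import XLMRobertaTokenizer, XLMRobertaForSequenceClassification

EXTRA_TOKENS = ["[TOX]", "[SEN_POS]", "[SEN_NEG]", "[SEN_NEU]",
                "[ORG]", "[/ORG]", "[DAT]", "[/DAT]"]   # illustrative subset

tokenizer = XLMRobertaTokenizer.from_pretrained("xlm-roberta-base")
model = XLMRobertaForSequenceClassification.from_pretrained(
    "xlm-roberta-base", num_labels=2)

tokenizer.add_special_tokens({"additional_special_tokens": EXTRA_TOKENS})
model.resize_token_embeddings(len(tokenizer))      # account for the new tokens

# A source sentence decorated with feature markers, paired with its tagged
# translation (toy text); the tokenizer builds the <s> src </s></s> tgt </s> pair.
src = "[TOX] [SEN_NEG] You ruined [ORG] Wikipedia [/ORG]"
tgt = "[SEN_POS] Du hast [ORG] Wikipedia [/ORG] verbessert"
inputs = tokenizer(src, tgt, return_tensors="pt")
logits = model(**inputs).logits                    # 2-way ERR/NOT scores
```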
The second approach (shown in Figure 3) is to modify hidden states, where the extracted features are represented as numbers and appended to the hidden state of the [CLS] token. Due to limited time, we only experimented with NE features using this approach. Since some named entity types are similar, they can be grouped into one type. In this approach, except for DAT, which is an independent category, we group ORG and PER as a category, CRD and ORD as a category, and NRP and GPE as a category, so that finally we have four categories. The feature that is used here is the count of the four NE categories in the source and target sentences. It is presented as a vector of length 8, where the first 4 numbers represent the counts of these NE categories for the source sentence, and the last 4 numbers are for the translation. First we feed the source sentence and its translation into the XLM-RoBERTa encoder, then we append the vector of counts to the output of the [CLS] token. The modified hidden state is then fed to the classification head. Our expectation is that the additional information (vector of counts) could guide the classifier to give more accurate predictions, because a deviation in named entity counts may be indicative of critical errors. For example, if the source sentence contains 3 named entities and the translation contains only 1 named entity, the translation may be missing some named entities.
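The following sketch shows one plausible reading of this second approach; the exact shape of the classification head is an assumption on our part, not the released architecture.

```python
import torch
import torch.nn as nn
from transformers import XLMRobertaModel

class NECountClassifier(nn.Module):
    """Append an 8-dim NE-count vector to the [CLS] state before classifying."""

    def __init__(self, model_name="xlm-roberta-base", n_counts=8, n_labels=2):
        super().__init__()
        self.encoder = XLMRobertaModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.head = nn.Sequential(            # assumed head shape
            nn.Linear(hidden + n_counts, hidden),
            nn.Tanh(),
            nn.Linear(hidden, n_labels),
        )

    def forward(self, input_ids, attention_mask, ne_counts):
        # ne_counts: float tensor of shape (batch, 8) with source/target counts.
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        cls = out.last_hidden_state[:, 0]      # [CLS] representation
        features = torch.cat([cls, ne_counts], dim=-1)
        return self.head(features)             # ERR/NOT logits
```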
Ensemble
To boost the performance, we ensemble several models to produce the final predictions. We experiment with two ensemble strategies. One strategy is label-level (late) ensemble. We first obtain the label predictions generated by different models using different features, then combine these predicted labels by performing majority vote to get a final label. The other strategy is logit-level ensemble, where we average the logits produced by different models and then produce the final label using the averaged logits.
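Both strategies can be sketched in a few lines (toy predictions, our own illustration):

```python
import numpy as np

def label_ensemble(label_lists):
    """Majority vote over per-model label predictions (lists of 'ERR'/'NOT')."""
    stacked = np.array(label_lists)                         # (n_models, n_examples)
    return [max(set(col), key=list(col).count) for col in stacked.T]

def logit_ensemble(logit_arrays):
    """Average per-model logits, then take the argmax class per example."""
    mean_logits = np.mean(np.stack(logit_arrays), axis=0)   # (n_examples, n_classes)
    return mean_logits.argmax(axis=-1)

preds = [["ERR", "NOT"], ["ERR", "ERR"], ["NOT", "NOT"]]
print(label_ensemble(preds))            # ['ERR', 'NOT']
```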
Results
This section presents the evaluation results of the proposed methods. Except for the baseline score on the test set in Section 3.2, which is produced by MonoTransQuest using a pretrained XLM-RoBERTa-base model with a batch size of 128, a learning rate of 2e-5, and a linear learning rate warm-up ratio of 10%, all the other scores (including the baseline score on the dev set in Section 3.1) are produced using the following hyperparameters: 64 for batch size, 2e-5 for learning rate, 30% for the warm-up ratio.

Table 3: Matthews's Correlation Coefficient (MCC) between predictions and gold labels using different methods on the development set, trained on the XLM-RoBERTa-base model. "Best" and "average" stand for the highest and average score of three runs respectively. The bold numbers are the best result for the average of three runs in that language pair. For En-Cs, we only experiment on two cases due to the lack of feature availability for Czech.
Results on Dev Set
As described in Section 2.2, we explore nine feature types: source and target toxicity, source and target sentiment, and 7 types of source and target named entities. We trained our model using the first approach (adding special tokens) for each of the nine feature types and the second approach (modifying hidden states) for named entities only. For each method or feature, we run the model three times with different seeds and report average performance, as well as the performance of the best of the three models. The results on the development set are shown in Table 3.

The results follow our expectation that En-De could achieve the highest MCC score among the four language pairs, as the training set of En-De is more balanced compared to the other three language pairs. Meanwhile, En-Ja has the lowest MCC score, as the dataset is the most imbalanced. The results also show that adding a weighted sampler to deal with imbalanced data improves the models' performance in most cases. As for the additional features, some of them are useful, but it depends on the language pair. For example, the toxicity feature can improve the score in En-De but cannot improve performance in En-Ja and En-Zh, while the sentiment token is helpful in En-Ja and En-Zh but does not boost the score in En-De.

We note that the results may be affected by fluctuations because of different random seeds. Sometimes multiple runs of the same case will produce fairly different results. This is a general problem of neural models for QE as well as other tasks and requires further investigation. For example, the results of three runs of adding the NE (NRP) feature in En-Ja vary a lot: the best score from the three runs is 0.247, which is above the baseline score, but the average score is 0.193, which is well below the baseline.
Results on Test Set
We use ensembling to produce the final results. The different models to ensemble are trained using different features, and hence focus on different types of errors, thus potentially leading to different predictions. Not all these models lead to improvements over the base (no features) model; in fact, adding some features decreases the performance for some languages. Therefore, we tested ensembles of models with all features and ensembles of only those models whose features achieve a higher score on the development set in our ablation experiments (Table 3). We found that ensembling all models leads to a lower score than ensembling the best few models.
In our experiments, ensembling models with better performance than the base model improves the results of all languages except En-De. For that language pair, the best result is achieved by only adding the NE (DAT) token to the XLM-RoBERTa-large base architecture. The final results of our submission to the CED shared task and our ranks are shown in Table 4.
Source: Upon further research I have found irrefutable proof that he got the nickname for the masterful way he cleaves beavers with his massive member.
Translation: 经过进一步的研究, 我发现了不可否认的证据, 那就是他用他巨大的成员把贝弗切开的巧妙方法获得了绰号。
Baseline label: NOT    Our label: ERR    True label: ERR
Analysis:
The source sentence is not toxic and the sentiments of both sentences are neutral. However, the machine translation system mistakenly treats "beavers" as a name and produces a name in Chinese, which is detected by spaCy. The translation introduces a named entity which does not exist in the source sentence. Therefore, this is a critical error.
Source: REDIRECT Talk:Historical Archive of the City of Cologne
Translation: 主题演讲: 科隆市历史档案
Baseline label: NOT    Our label: ERR    True label: NOT
Analysis:
In this case, spaCy does not report "Cologne" as a named entity in the source sentence, but in the translation it reports the city name in Chinese as a named entity (GPE). Therefore, our model considers that the translation introduces a new named entity. There is a deviation in named entities between source and translation, and the case is mistakenly classified as a critical error.
Source: Goanikontes is an oasis is hidden within the Goanikontes Region.
Translation: 戈亚尼科恩特是戈亚尼科恩特地区内的一个绿洲。
Baseline label: NOT    Our label: ERR    True label: NOT
Analysis:
Similarly to the previous case, spaCy correctly detects "Goanikontes" as a location name in the source, but in the translation it mistakenly reports the corresponding location name in Chinese as a person's name. Hence, our model assumes there is a deviation in named entities and predicts this case as a critical error. Such mistakes from the APIs are likely to lead the model to give wrong predictions.
Qualitative Analysis
We conducted a manual inspection on En-Zh in an attempt to understand whether the additional features actually contribute to better performance. The choice of the language pair that we analysed was determined by our ability to understand both languages of the pair. We compared our final submitted predictions on the test set with the baseline predictions. We found that, compared to the baseline result, our final model predicts more ERR labels: the labels of 82 out of 1000 test samples are flipped from NOT to ERR, among which 35 are correct changes (from false negative to true positive) and 47 are incorrect (from true negative to false positive). We give some examples in Table 5 to compare our predictions with the baseline results. These examples show that feature engineering actually pushes the model to predict more ERRs. Overall this improves the performance to some extent, but is subject to the correctness of the feature extractor. Inaccurate results from the APIs give the model wrong information and limit the improvement in performance of our models.
Conclusions
This paper describes our submission to the sentence-level CED task in WMT21. Our work extends the baseline MonoTransQuest architecture by exploring feature engineering and model ensembling, as well as weighted sampling to deal with imbalanced datasets. Potentially due to the skewed distribution of labels in the dataset, the model performance varies substantially over different runs. However, our results averaged over multiple random seeds show that our feature engineering and ensembling lead to large improvements over the baseline. Our official submission achieves the 2nd position in En-Cs, En-De, En-Ja, and the 4th position in En-Zh.
Figure 1: Architecture of the baseline model. This is a MonoTransQuest model where we pass the output of the [CLS] token to a classifier.
Table 2: Descriptions of the seven types of NE features and their abbreviations.
Table 4: Final results and ranks of the CED shared task in WMT 2021. These results can also be found on the CodaLab result page, where the name of our team is gjiang. The metric in the table is Matthews's Correlation Coefficient (MCC) between predictions and gold labels. Below our score for each language pair is the model(s) that we used to achieve this score. "(base)" and "(large)" denote that the model is trained using XLM-RoBERTa-base and XLM-RoBERTa-large, respectively.

Source: YOU SUCK IT!!! AS YOU'RE USED TO SUCK PHALLUS, NAZI HINDUIST LIKE HITLER!!!
Translation: 你吸了它, 就像你以前吸过帕卢斯一样, 纳西迷人就像希特勒一样!
Baseline label: NOT; Our label: ERR; True label: ERR

Table 5: Case study: comparison of baseline predictions and our ensembled predictions in En-Zh.
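For reference, the MCC metric named in the Table 4 caption can be computed with scikit-learn; the labels below are toy values for illustration only.

```python
# Matthews Correlation Coefficient between predictions and gold labels,
# the official metric of the CED shared task (toy labels for illustration).
from sklearn.metrics import matthews_corrcoef

gold = ["NOT", "ERR", "NOT", "NOT", "ERR", "NOT"]
pred = ["NOT", "ERR", "ERR", "NOT", "NOT", "NOT"]
print(matthews_corrcoef(gold, pred))
```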
Our code and data are available from https://github.com/conanjgz/critical-error-detection-for-MT
https://www.perspectiveapi.com/
4 https://cloud.google.com/natural-language
5 https://spacy.io/
Unsupervised cross-lingual representation learning at scale. Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, Veselin Stoyanov, 10.18653/v1/2020.acl-main.747Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsAlexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettle- moyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, pages 8440- 8451, Online. Association for Computational Lin- guistics.
TransQuest: Translation quality estimation with cross-lingual transformers. Tharindu Ranasinghe, Constantin Orasan, Ruslan Mitkov, 10.18653/v1/2020.coling-main.445Proceedings of the 28th International Conference on Computational Linguistics. the 28th International Conference on Computational LinguisticsBarcelona, SpainInternational Committee on Computational LinguisticsTharindu Ranasinghe, Constantin Orasan, and Ruslan Mitkov. 2020. TransQuest: Translation quality esti- mation with cross-lingual transformers. In Proceed- ings of the 28th International Conference on Com- putational Linguistics, pages 5070-5081, Barcelona, Spain (Online). International Committee on Compu- tational Linguistics.
Are we estimating or guesstimating translation quality?. Shuo Sun, Francisco Guzmán, Lucia Specia, 10.18653/v1/2020.acl-main.558Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnline. Association for Computational LinguisticsShuo Sun, Francisco Guzmán, and Lucia Specia. 2020. Are we estimating or guesstimating translation qual- ity? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6262-6267, Online. Association for Compu- tational Linguistics.
Yuqing Tang, Chau Tran, Xian Li, Peng-Jen Chen, Naman Goyal, Vishrav Chaudhary, Jiatao Gu, and Angela Fan. 2020. Multilingual translation with extensible multilingual pretraining and finetuning.
Transformers: State-of-the-art natural language processing. Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Clara Patrick Von Platen, Yacine Ma, Julien Jernite, Canwen Plu, Teven Le Xu, Sylvain Scao, Mariama Gugger, Quentin Drame, Alexander M Lhoest, Rush, Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations. the 2020 Conference on Empirical Methods in Natural Language Processing: System DemonstrationsOnline. Association for Computational LinguisticsThomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pier- ric Cistac, Tim Rault, Rémi Louf, Morgan Funtow- icz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language pro- cessing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Asso- ciation for Computational Linguistics.
Ellery Wulczyn, Nithum Thain, Lucas Dixon, 10.6084/m9.figshare.4264973.v3Wikipedia Talk Corpus. Ellery Wulczyn, Nithum Thain, and Lucas Dixon. 2017. Wikipedia Talk Corpus. |
204,807,333 | Computational Argumentation Synthesis as a Language Modeling Task | Synthesis approaches in computational argumentation so far are restricted to generating claim-like argument units or short summaries of debates. Ultimately, however, we expect computers to generate whole new arguments for a given stance towards some topic, backing up claims following argumentative and rhetorical considerations. In this paper, we approach such an argumentation synthesis as a language modeling task. In our language model, argumentative discourse units are the "words", and arguments represent the "sentences". Given a pool of units for any unseen topic-stance pair, the model selects a set of unit types according to a basic rhetorical strategy (logos vs. pathos), arranges the structure of the types based on the units' argumentative roles, and finally "phrases" an argument by instantiating the structure with semantically coherent units from the pool. Our evaluation suggests that the model can, to some extent, mimic the human synthesis of strategy-specific arguments. | [
52009442,
71907,
16723397,
18231353,
28193257,
14599021,
3894771,
16075189,
1951927,
1957433,
44099358,
3083231
] | Computational Argumentation Synthesis as a Language Modeling Task
28 Oct -1 Nov, 2019
Roxanne El Baff
Bauhaus-Universität Weimar
WeimarGermany
Henning Wachsmuth henningw@upb.de
Paderborn University
PaderbornGermany
Khalid Al-Khatib
Bauhaus-Universität Weimar
WeimarGermany
Manfred Stede stede@uni-potsdam.de
University of Potsdam
PotsdamGermany
Benno Stein
Bauhaus-Universität Weimar
WeimarGermany
Computational Argumentation Synthesis as a Language Modeling Task
Proceedings of The 12th International Conference on Natural Language Generation
The 12th International Conference on Natural Language GenerationTokyo, Japan28 Oct -1 Nov, 201954
Synthesis approaches in computational argumentation so far are restricted to generating claim-like argument units or short summaries of debates. Ultimately, however, we expect computers to generate whole new arguments for a given stance towards some topic, backing up claims following argumentative and rhetorical considerations. In this paper, we approach such an argumentation synthesis as a language modeling task. In our language model, argumentative discourse units are the "words", and arguments represent the "sentences". Given a pool of units for any unseen topic-stance pair, the model selects a set of unit types according to a basic rhetorical strategy (logos vs. pathos), arranges the structure of the types based on the units' argumentative roles, and finally "phrases" an argument by instantiating the structure with semantically coherent units from the pool. Our evaluation suggests that the model can, to some extent, mimic the human synthesis of strategy-specific arguments.
Introduction
Existing research on computational argumentation largely focuses on the analysis side. Various analysis tasks are widely studied including identifying the claims along with their supporting premises (Stab and Gurevych, 2014), finding the relation between argumentative units (Cocarascu and Toni, 2017), and assessing the persuasiveness of arguments (Habernal and Gurevych, 2016).
Diverse downstream applications, however, necessitate the development of argumentation synthesis technologies. For example, synthesis is needed to produce a summary of arguments for a given topic (Wang and Ling, 2016) or to build a debating system where new arguments are exchanged between the users and the system (Le et al., 2018).
As a result, a number of recent studies addresses the argumentation synthesis task. These studies have proposed different approaches to generating claims or reasons for a given topic, partly with a particular stance towards the topic (Bilu and Slonim, 2016;Hua and Wang, 2018). However, the next important synthesis step is still missing in the literature, namely, to generate complete texts including both argumentative and rhetorical considerations. With the latter, we refer to Aristotle's three means of persuasion: logos (providing logical arguments), ethos (demonstrating credibility), and pathos (evoking emotions). As discussed by Wachsmuth et al. (2018), following a rhetorical strategy is key to achieving persuasion with argumentative texts. This paper proposes a new computational approach that synthesizes argumentative texts following a rhetorical strategy. We do not tackle this task immediately "in the wild", i.e., generating an entirely new argumentative text for a freely-chosen topic and a possibly complex strategy. Rather, we consider a "controlled" synthesis setting, with the goal of successively creating models that are able to deal with more complex settings later on.
In particular, given a pool of argumentative discourse units (ADUs), our approach generates arguments for any unseen pair of topic and stance (e.g., "con abortion") as well as a basic rhetorical strategy (i.e., logos-oriented vs. pathos-oriented). 1 To abstract from the arguments' topics during training, we first identify different ADU types using clustering. Our approach then learns to select unit types matching the given strategy and to arrange them according to their argumentative roles. Both steps are realized as a language model where ADUs represent words and arguments are sentences. Finally, our approach "phrases" an argument by predicting the best set of semantically related ADUs for the arranged structure using supervised regression. Thereby, we ensure that the synthesized texts are composed of meaningful units, a property that neural generation methods barely achieve so far.
In our evaluation, we utilize the dataset of Wachsmuth et al. (2018). This dataset contains 260 argumentative texts on 10 topic-stance pairs, where each text composes five ADUs in a logos-oriented or pathos-oriented manner. In our experiments, we train our approach on nine topic-stance pairs and then generate an argument for the tenth. The results demonstrate that our approach successfully manages to combine pairs of ADUs, but its performance on longer sequences of ADUs is limited.
Altogether, our contribution is three-fold:
1. A new view of argumentation synthesis that represents argumentative and rhetorical considerations with language modeling.
2.
A novel approach that selects, arranges, and phrases ADUs to synthesize strategy-specific arguments for any topic and stance.
3. First experimental evidence that arguments with basic rhetorical strategies can be synthesized computationally. 2
Related Work
Recently, some researchers have tackled argumentation synthesis statistically with neural networks. For instance, Wang and Ling (2016) employed a sequence-to-sequence model to generate summaries of argumentative texts, and Hua and Wang (2018) did similar to generate counterarguments. Using neural methods in text generation, it is possible to achieve output that is on topic and grammatically (more or less) correct. However, when the desired text is to span multiple sentences, the generated text regularly suffers from incoherence and repetitiveness, as for instance discussed by Holtzman et al. (2018) who examine texts that were produced by RNNs in various domains. While these problems may be tolerable to some extent in some applications, such as chatbots, bad text cannot be accepted in an argumentative or debating scenario, where the goal is to convince or persuade a reader (rather than to merely inform or entertain). Holtzman et al. (2018) propose to alleviate incoherence and repetitiveness by training a set of discriminators, which aim to ensure that a text respects the Gricean maxims of quantity, quality, relation, and manner (Grice, 1975). To this end, they employ specific datasets, such as one that opposes authentic text continuation to randomly-sampled text. The discriminators learn optimal weightings for the various models and their combination, such that overall text quality is maximized. For argumentation, we hypothesize that one needs to go even further and eventually account for the author, implementing her underlying intention in the different parts of an argumentative text as well as in the relations between the parts.
In the past times of rule-based text generation, argumentation synthesis was a popular task (Zukerman et al., 2000). Approaches involved much handcrafted (linguistic and domain) knowledge and user modeling. For example, the system of Carenini and Moore (2006) compares attributes of houses (from a database) to desired target attributes (from a user model), to then recommend a house to the reader in a convincing text following the Gricean maxims. To this end, it selected house attributes potentially interesting to the user, arranged, and finally phrased them. The resulting texts resembled the arguments we work with here, which have been manually composed by experts (Wachsmuth et al., 2018) from the claims, evidence, and objections in the arg-microtext corpus (Peldszus and Stede, 2016). To achieve a similar level of output control, today's text-to-text generation models need to account for the various interdependencies between the text units to be combined.
Most related to our approach is the system of Sato et al. (2015), where a user can enter a claim-like topic along with a stance. The system then generates argumentative paragraphs on specific aspects of the topic by selecting sentences from 10 million news texts of the Gigaword corpus. Potentially relevant aspects are those that trigger evaluative judgment in the reader. The sentences are arranged so that the text starts with a claim sentence and is followed by support sentences, employing the approach of Yanase et al. (2015). The support sentences are ordered by maximizing the semantic connectivity between sentences. Finally, some rephrasing is done in terms of certain aspects of surface realization. In a manual evaluation, however, no text was seen as sounding natural, underlining the difficulty of the task. In contrast to Sato et al. (2015), we learn directly from input data what argumentative discourse units to combine and how to arrange them. We leave surface realization aside to keep the focus on the argument composition.
Role    ID   Argumentative Discourse Unit
Thesis  t1   German universities should on no account charge tuition fees
Thesis  t2   the universities in Germany should not under any circumstances charge tuition fees
Thesis  t3   tuition fees should not generally be charged by universities
Thesis  t4   universities should not charge tuition fees in Germany
Con     c1   one could argue that an increase in tuition fees would allow institutions to be better equipped
Con     c2   those who study later decide this early on, anyway
Con     c3   to oblige non-academics to finance others' degrees through taxes is not just
Con     c4   unfortunately sponsoring can lead to disagreeable dependencies in some cases
Pro     p1   education and training are fundamental rights which the state, the society must provide
Pro     p2   education must not be a question of money in a wealthy society such as Germany
Pro     p3   fees result in longer durations of studies
Pro     p4   funding-wise it ought to be considered how costs incurred by students from other (federal) states can be reimbursed
Pro     p5   if a university lacks the funds, sponsors must be found
Pro     p6   longer durations of studies are costly
Pro     p7   studying and taking higher degrees must remain a basic right for everyone
Pro     p8   there are other instruments to motivate tighter discipline while studying
Pro     p9   this would impede or prevent access to those who are financially weaker
Pro     p10  this would mean that only those people with wealthy parents or a previous education and a part-time job while studying would be able to apply for a degree programme in the first place
Pro     p11  universities are for all citizens, independent of their finances
Pro     p12  what is the good of a wonderfully outfitted university if it doesn't actually allow the majority of clever people to broaden their horizons with all that great equipment
Topic        Should all universities in Germany charge tuition fees?
Stance       Con

Some other approaches have been proposed that recompose existing text segments in new arguments. In particular, Bilu and Slonim (2016) generated new claims by "recycling" topics and predicates that were found in a database of claims. Claim selection involves preferring predicates that are generally amenable to claim units and that are relevant for the target topic. Egan et al. (2016) created summaries of the main points in a debate, and Reisert et al. (2015) synthesized complete arguments from a set of manually curated topic-stance relations based on the fine-grained argument model of Toulmin (1958). However, we are not aware of any approach that synthesizes arguments fully automatically, let alone that follows rhetorical considerations in the synthesis process.
Data
To develop our model for argumentation synthesis, we exploit the dataset recently developed by Wachsmuth et al. (2018). The dataset comprises 260 manually generated argumentative texts. The generation of each text, for one topic-stance pair, has been conducted in a systematic fashion following the three canons of rhetoric (Aristotle, 2007):
1. Inventio ∼ Selecting a subset of argumentative discourse units (ADUs) from a pool of given ADUs for a topic-stance pair.
2. Dispositio ∼ Arranging the selected ADUs in a sequential order.
3. Elocutio ∼ Phrasing the arranged ADUs by adding connectives at unit-initial or unit-final positions.
Specifically, Wachsmuth et al. (2018) selected a pool of 200 ADUs for 10 pairs of controversial topic and stance from the English version of the arg-microtexts corpus (Peldszus and Stede, 2016). As a preprocessing step, they "decontextualized" these ADUs manually by removing connectives, resolving pronouns, and similar. Each topic-stance pair comes with 20 such ADUs: four theses, four con units, and 12 pro units. Table 1 shows the ADU list for one topic-stance pair. 26 participants were asked by Wachsmuth et al. (2018) to create short argumentative texts for each topic-stance pair following one of two basic rhetorical strategies: (1) logos-oriented, i.e., arguing logically, and (2) pathos-oriented, i.e., arguing based on emotional appeals. For each topic-stance pair they created an argument by selecting one thesis, one con and three pro units that they thought could best form a persuasive argument following the given strategies. Table 2 shows two samples of generated arguments in the dataset.
The dataset contains 130 logos-oriented and 130 pathos-oriented argumentative texts. We use these 260 texts to develop and evaluate our computational model for argumentation synthesis.

Strategy           Text Manually Synthesized From Five Argumentative Discourse Units
Logos-oriented     c1 one could argue that an increase in tuition fees would allow institutions to be better equipped, t1 however German universities should on no account charge tuition fees. p1 education and training are fundamental rights which the state, the society must provide, p12 because what is the good of a wonderfully outfitted university if it doesn't actually allow the majority of clever people to broaden their horizons with all that great equipment. p4 Besides, funding-wise it ought to be considered how costs incurred by students from other (federal) states can be reimbursed.
Pathos-oriented    p1 education and training are fundamental rights which the state, the society must provide. t2 This is why the universities in Germany should not under any circumstances charge tuition fees. c1 one could argue that an increase in tuition fees would allow institutions to be better equipped, p3 however fees result in longer durations of studies p6 and longer durations of studies are costly.

Table 2: Two sample arguments manually synthesized from the ADUs in Table 1, which are included in the dataset of Wachsmuth et al. (2018). The italicized connectives were added by the participants; they are not part of the ADUs.
Approach
This section presents our computational approach to synthesize arguments for any pair of topic and stance, following one of two basic rhetorical strategies: arguing logically (logos-oriented) or arguing emotionally (pathos-oriented). A black-box view of the approach is shown in Figure 1. As input, our approach takes a strategy as well as a pool of argumentative discourse units (ADUs) for any specific topic-stance pair x. Each ADU has the role of a thesis (in terms of claim with a stance on the topic), a con point (objecting the thesis), or a pro point (supporting the thesis). The approach then imitates the human selection, arrangement, and "phrasing" of a sequence of n ADUs, in order to synthesize an argument. Phrasing is done only in terms of picking semantically coherent ADUs for the arranged sequence; the addition of connectives between ADUs is left to future work.
Below, we detail how we realize each step (selection, arrangement, and phrasing) with a topicindependent model. For each step, we explain how it is trained (illustrated in Figure 2) and how it is applied to an unseen topic-stance pair (Figure 3).
Selection Language Model
This model handles the selection of a set of n ADUs for a topic-stance pair x and a rhetorical strategy. We approach the selection as a language modeling task where each ADU is a "word" of our language model and each argument a "sentence". To abstract from topic, the model actually selects ADU types, as explained in the following.

Figure 1: Black-box view of our argumentation synthesis approach. The input is a rhetorical strategy as well as a pool of thesis, con, and pro ADUs for some topic-stance pair x. The approach outputs a strategy-specific sequence of n ADUs as an argument for x (here, n = 5).
Training of the Model
We start from a training set of ADUs for a set of m topic-stance pairs. To generalize the language model beyond the covered topics, each ADU is represented using features that aim to capture general emotion-related and logic-related characteristics, accounting for the two given strategies.
In particular, we first cluster the pool of all training ADUs based on their feature representation. As a result, each ADU is represented by a cluster label (A-F in Figure 2), where each label represents one ADU type. Now, for each of the strategies, we map each manually-generated sequence of ADUs to a sequence of cluster labels. Using these sequences of labels, we train one separate selection language model for each strategy.
For clustering, we rely on topic-independent features that we expect to implicitly encode logical and emotional strategies: (1) psychological meaningfulness (Pennebaker et al., 2015), (2) eight basic emotions (Plutchik, 1980; Mohammad and Turney, 2013), and (3) argumentativeness (Somasundaran et al., 2007). In the following, we elaborate on the concrete features that we extract.

Figure 2: Illustration of training the three models of our argumentation synthesis approach. The input is a corpus of argumentative texts for m topic-stance pairs, each decomposed into a sequence of theses, con units, and pro units. Initially, the set of all these ADUs is clustered to obtain a set of topic-independent ADU types, called A-F here. (1) Selection language model: Each argument is converted from a sequence of ADUs to a sequence of ADU types, and a language model is trained on these type sequences. (2) Arrangement language model: Each argument is converted from a sequence of ADUs to a sequence of ADU roles (thesis, pro, and con), and a language model is trained on these role sequences. (3) Phrasing regression model: A linear regression model is trained which scores each ADU sequence with respect to its semantic coherence.
Linguistic Inquiry and Word Count (LIWC)
LIWC is a lexicon-based text analysis that counts words in psychologically meaningful categories (Tausczik and Pennebaker, 2010). We use the version by Pennebaker et al. (2015), which contains the following 15 dimensions:
1. Language metrics, e.g., words per sentence.
2. Function words, e.g., pronouns and auxiliary verbs.
3. Other grammar, e.g., common verbs and comparisons.
4. Affect words, e.g., positive emotion words.
5. Social words, e.g., "family" and "friends".
6. Cognitive processes, e.g., "discrepancies" and "certainty".
7. Perceptual processes, e.g., "feeling".
8. Biological processes, e.g., "health".
9. Core drives and needs, e.g., "power" and "reward focused".
10. Time orientation, e.g., past-focused.
11. Relativity, e.g., "time" and "space".
12. Personal concerns, e.g., "work" and "leisure".
13. Informal speech, e.g., fillers and nonfluencies.
14. Punctuation, e.g., periods and commas.
15. Summary variables, as detailed below.
There are four summary variables, each of which is derived from various LIWC dimensions: (1) analytical thinking (Pennebaker et al., 2014), i.e., the degree to which people use narrative language (low value) or more logical and formal language (high); (2) clout (Kacewicz et al., 2014), i.e., the relative social status, confidence, and leadership displayed in a text; (3) authenticity (Newman et al., 2003), i.e., the degree to which people reveal themselves in an authentic way; and (4) emotional tone (Cohn et al., 2004), i.e., negative for values lower than 50 and positive otherwise.
NRC Emotional and Sentiment Lexicons
We use the NRC lexicon of Mohammad and Turney (2013). The lexicon has been compiled manually using crowdsourcing and contains a set of English words and their associations with (1) sentiment, i.e., negative and positive polarities, and (2) emotions, i.e., the eight basic emotions defined by Plutchik (1980): anger, anticipation, disgust, fear, joy, surprise, sadness, and trust. These features are represented as the count of words associated with each category (e.g., the count of sad words in an ADU).

MPQA Arguing Lexicon

Somasundaran et al. (2007) constructed a lexicon that includes the following arguing patterns: assessments, doubt, authority, emphasis, necessity, causation, generalization, structure, conditionals, inconsistency, possibility, wants, contrast, priority, difficulty, in-your-shoes, rhetorical question. We use the count of each arguing pattern in text as one feature (e.g., the number of assessment patterns in an ADU).

Figure 3: Illustration of applying our synthesis approach. Given the predicted type of each input ADU of the given topic-stance pair x, (1) the selection generates the most probable type sequence, (A, B, C, D, C). From the type sequence, a set of candidate arguments is decoded. (2) The arrangement filters out candidates not matching the most probable ADU role sequence, (Thesis, Con, Pro, Pro, Pro). (3) Phrasing scores each remaining argument and outputs the top argument.
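Putting the lexicon resources together, the sketch below illustrates the general lexicon-count idea behind these ADU features. The tiny in-line emotion lexicon is a placeholder for the real NRC resource (LIWC is proprietary and MPQA patterns work analogously); it is not the actual feature extractor used in our experiments.

```python
# Sketch of lexicon-based ADU features in the spirit of the NRC emotion
# counts described above. The in-line lexicon is a placeholder, not the
# real NRC, LIWC, or MPQA resources.
import re
from collections import Counter

EMOTION_LEXICON = {          # placeholder entries for illustration only
    "costly": ["sadness", "negative"],
    "right": ["trust", "positive"],
    "prevent": ["fear", "negative"],
}
CATEGORIES = ["anger", "anticipation", "disgust", "fear", "joy",
              "sadness", "surprise", "trust", "negative", "positive"]

def adu_features(adu: str) -> list:
    tokens = re.findall(r"[a-z']+", adu.lower())
    counts = Counter(cat for tok in tokens for cat in EMOTION_LEXICON.get(tok, []))
    return [counts[c] for c in CATEGORIES]

print(adu_features("longer durations of studies are costly"))
```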
Application of the Model
As shown in Figure 3, the selection language model takes the ADUs of an unseen topic-stance x as input. It then outputs a set of candidate arguments, in terms of sequences of ADUs. Each ADU is encoded into a cluster label (representing an ADU type). For example, one might have the following mappings, given the six labels A-F from Figure 2:
A ← {The_{x,1}, The_{x,2}, Con_{x,3}}
B ← {Con_{x,2}, Pro_{x,1}}
C ← {The_{x,3}, Con_{x,c}, Pro_{x,2}, Pro_{x,3}}
D ← {Pro_{x,p}, Con_{x,1}}
E ← {The_{x,t}}
F ← {The_{x,4}, Con_{x,4}, Pro_{x,4}}
The language model for either of the two rhetorical strategies generates a set of arguments where each argument is composed of n cluster labels, e.g., (A, B, C, D, C) for n = 5 in Figure 3. This set is ranked by the probability of the associated sequence. For example, assume that (A, B, C, D, C) is most probable. Then we decode all possible ADU sequences for topic-stance pair x from (A, B, C, D, C) to a set of candidate arguments:

(A, B, C, D, C) → {The_{x,1}, The_{x,2}, Con_{x,3}} × {Con_{x,2}, Pro_{x,1}} × {The_{x,3}, Con_{x,c}, Pro_{x,2}, Pro_{x,3}} × {Pro_{x,p}, Con_{x,1}} × {The_{x,3}, Con_{x,c}, Pro_{x,2}, Pro_{x,3}}
The output of the model is a set of candidate arguments, which becomes the input of the arrangement language model.
Arrangement Language Model
In the arrangement process, we aim to imitate the human behavior of arranging ADUs for a specific topic-stance following a rhetorical strategy (here, logos or pathos). Again, we approach this problem as a language modeling task. Each ADU role (thesis, pro, or con) is a word of the language model and each argument a sentence.
Training of the Model
As sketched in Figure 2, we first convert the human-generated arguments from a sequence of ADUs to a sequence of ADU roles. Then, we use these sequences to train a language model for each strategy.
Application of the Model
As shown in Figure 3, the arrangement language model takes as input the candidate arguments that we get from the selection language model and outputs a set of filtered candidate arguments.
The language model for a specific strategy generates a set of argument structures where each such structure is a sequence of n ADU roles, e.g., (Thesis, Con, Pro, Pro, Pro) for n = 5 in Figure 3. This set is ranked by the probability of the sequences. For example, assume that the most frequent sequence is (Thesis, Con, Pro, Pro, Pro).
Using the output from the selection language model, we filter out all candidate arguments that do not match (Thesis, Con, Pro, Pro, Pro), ending up with the following filtered arguments:

{The_{x,1}, The_{x,2}} × {Con_{x,2}} × {Pro_{x,2}, Pro_{x,3}} × {Pro_{x,p}} × {Pro_{x,2}, Pro_{x,3}}
The output of the model is a filtered set of candidate arguments, which becomes the input of the phrasing regression model.
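The two steps above can be sketched on the toy example as follows. The cluster-to-ADU mapping and the "most probable" sequences are taken as given, and identifiers such as "The_x1" are illustrative placeholders rather than real data.

```python
# Sketch of selection decoding and arrangement filtering on the toy example
# above. The mapping and the most probable sequences are assumed as given;
# ADU identifiers like "The_x1" are placeholders.
from itertools import product

adus_by_type = {
    "A": ["The_x1", "The_x2", "Con_x3"],
    "B": ["Con_x2", "Pro_x1"],
    "C": ["The_x3", "Con_xc", "Pro_x2", "Pro_x3"],
    "D": ["Pro_xp", "Con_x1"],
    "E": ["The_xt"],
    "F": ["The_x4", "Con_x4", "Pro_x4"],
}
best_type_sequence = ("A", "B", "C", "D", "C")            # from the selection LM
best_role_sequence = ("The", "Con", "Pro", "Pro", "Pro")  # from the arrangement LM

# (1) Decode all candidate arguments as a Cartesian product over ADU types.
candidates = list(product(*(adus_by_type[t] for t in best_type_sequence)))

# (2) Keep only candidates whose roles (read off the identifier prefix here)
#     match the most probable role sequence.
filtered = [seq for seq in candidates
            if tuple(adu[:3] for adu in seq) == best_role_sequence]
print(len(candidates), len(filtered))
```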
Phrasing Regression Model
The set of arguments resulting from the selection and arrangement language models is based on topic-independent features. The missing step is to account for the topical relationship between the ADUs in each generated argument. We approach this task with supervised regression. As indicated above, our model does not really phrase an argument. Rather, it aims to choose the best among the given set of candidates in terms of semantic coherence.
Training of the Model
For each argument, we opt for a feature representation that embeds the content properties of ADUs in order to capture their content relationship. Concretely, we represent each argument by calculating the semantic similarities of each adjacent bigram in a human-generated argument. We train a linear regression model where each instance represents the features of one argument. To this end, we set a score to be the sum of the probabilities of ADU bigrams occurring in one argument.
The phrasing model scores each of the filtered arguments given as output by the arrangement model. The argument with the highest score is the final generated argument.
Application of the Model
At this point, the phrasing model is provided with the filtered arguments from the arrangement model. For each filtered argument, we extract the bigram features (semantic similarities). Next, using the phrasing model, we predict the score of each sequence. The sequence with the highest score is the generated argument. In Figure 3, this is:
(The_{x,2}, Con_{x,2}, Pro_{x,2}, Pro_{x,p}, Pro_{x,3})
Experiments
In this section, we report the results of evaluating the introduced approach to argumentation synthesis based on the dataset described in Section 3.

Strategy          2-grams   3-grams
Logos-oriented    9,110.6   9,466.3
Pathos-oriented   7,939.5   10,279.6

Table 3: Selection. Perplexity of the 2-gram and 3-gram language models for each strategy, averaged over 10 leave-one-topic-out runs using Laplace smoothing.
Experimental Set-up
Our experiments are designed in a leave-one-topic-out cross-validation setting: From the 10 topic-stance pairs in the dataset, we use nine for training and the last as the test fold, and we repeat this once for each possible fold. This way, no topic-specific knowledge can be used in the synthesis process. For each given basic rhetorical strategy (logos-oriented and pathos-oriented), we train one model each for the selection, the arrangement, and the "phrasing" of argumentative discourse units (ADUs) on the nine training folds. The arguments synthesized by their combination are then evaluated against the human-generated arguments in the test folds. The evaluation covers all three models as well as the final generated argument for each strategy. We report the average accuracy across all ten folds for each of the models.
Training: Selection Language Model
In each training/test experiment for one of the two strategies, we first abstract all ADUs across all strategy-specific topic-stance pairs by extracting the LIWC, NRC, and MPQA features, as described in Section 4.1. Then, we cluster the given training set using standard k-means (Ostrovsky et al., 2012). After some initial experiments, we decided to set k to 6, because this best balanced the distribution of arguments over clusters and showed clear strategy-specific differences. 3 Using the resulting clustering model, we predicted the type A-F of each ADU in the test set (the tenth topic).
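A minimal sketch of this clustering step with scikit-learn is shown below; the feature matrices stand in for the per-ADU LIWC/NRC/MPQA features described in Section 4.1 (random placeholder data here).

```python
# Sketch of the clustering step: cluster training ADUs into k = 6
# topic-independent types and predict the type of each held-out ADU.
# The feature matrices are placeholders for the real LIWC/NRC/MPQA features.
import numpy as np
from sklearn.cluster import KMeans

train_features = np.random.rand(180, 40)   # ADUs of the nine training topics
test_features = np.random.rand(20, 40)     # ADUs of the held-out topic

kmeans = KMeans(n_clusters=6, random_state=0, n_init=10).fit(train_features)
train_types = kmeans.labels_                # cluster labels 0-5, i.e. types A-F
test_types = kmeans.predict(test_features)  # predicted types for unseen ADUs
```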
Given the ADU types, we next converted the human-generated training and test arguments from a sequence of ADUs to a sequence of ADU types. After that, we trained one 2-gram and one 3-gram selection language model. 4 In Table 3, we report the mean perplexity of the models for both strategies.
As shown, the 2-gram perplexity is lower than the 3-gram perplexity in both cases. We assume that the reason lies in the limited size of the dataset and the narrow setting: Only 117 sentences (ADUs) are given per strategy for training, with a vocabulary size of 6 (number of ADU types). Based on the results, we decided to use the 2-gram selection language model to generate candidate arguments.
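For illustration, the sketch below trains a 2-gram language model with Laplace smoothing over ADU-type sequences and measures its perplexity on a held-out sequence using NLTK; the toy sequences stand in for the cluster-label sequences of the training arguments.

```python
# Sketch of a 2-gram selection language model over ADU-type sequences with
# Laplace smoothing, plus its perplexity on a held-out sequence (toy data).
from nltk.lm import Laplace
from nltk.lm.preprocessing import padded_everygram_pipeline, pad_both_ends
from nltk.util import ngrams

train_seqs = [["A", "B", "C", "D", "C"], ["A", "C", "C", "D", "B"]]
test_seq = ["A", "B", "C", "C", "D"]

order = 2
train_ngrams, vocab = padded_everygram_pipeline(order, train_seqs)
lm = Laplace(order)
lm.fit(train_ngrams, vocab)

test_ngrams = list(ngrams(pad_both_ends(test_seq, n=order), order))
print(lm.perplexity(test_ngrams))
```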
Training: Arrangement Language Model
To train arrangement as described in Section 4.2, we took all arguments of the nine training topics in each experiment. We converted each argument from a sequence of ADUs to a sequence of ADU roles (thesis, pro, and con). After that, we trained a 2-gram and 3-gram language model for each strategy. Table 4 lists the mean perplexity values over the 10 folds.

Table 4: Arrangement. Perplexity of the 2-gram and 3-gram language models for each strategy, averaged over 10 leave-one-topic-out runs using Laplace smoothing.
Here, the perplexity is lower for 3-grams than for 2-grams, which can be expected to yield better performance. Therefore, we used the 3-gram language model to filter the set of candidate arguments.
Training: Phrasing Regression Model
For phrasing (in terms of choosing the best ADU sequence), we first extracted features from each candidate, as described in Section 4.3. Then, we calculated the semantic similarities between each pair of adjacent ADUs as follows:
1. We obtained a 300-dimensional word embedding for each word in an ADU using the pre-trained GloVe common-crawl model (Pennington et al., 2014). 5
2. We averaged the embeddings of all words in an ADU, resulting in one vector representing the ADU.
3. For each adjacent pair of ADUs, we computed the cosine similarity of their vectors.

Figure 4 shows a histogram of the distribution of the cosine similarities of each adjacent pair of ADUs (i.e., each ADU 2-gram) in logos-oriented arguments and in pathos-oriented arguments. We observe a generally high similarity between neighboring ADUs for both strategies, with logos-oriented 2-grams being slightly more similar on average. Given the ADU 2-grams, we train a linear regression model that predicts the sum of ADU 2-gram probabilities in each argument. In case of the logos strategy, the model has a mean squared error (MSE) of 0.05. In case of pathos, the MSE is 0.03.
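The feature computation and the regression scoring can be sketched as below. The tiny `glove` dictionary with random vectors is a placeholder for the real pre-trained GloVe embeddings, and the training targets are only indicated in comments.

```python
# Sketch of the phrasing features: each ADU is the average of its GloVe word
# vectors, and an argument is represented by the cosine similarities of its
# adjacent ADU pairs; a linear regression then scores candidate arguments.
# The `glove` dict below is a random placeholder for the real 300-d vectors.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
glove = {w: rng.normal(size=300) for w in
         "fees are costly education must not be a question of money".split()}

def adu_vector(adu):
    vecs = [glove[w] for w in adu.lower().split() if w in glove]
    return np.mean(vecs, axis=0) if vecs else np.zeros(300)

def argument_features(adus):
    vecs = [adu_vector(a) for a in adus]
    return [float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))
            for u, v in zip(vecs, vecs[1:])]

# Training: X = one row of adjacent-ADU similarities per argument,
#           y = sum of its ADU 2-gram probabilities (the target score).
# reg = LinearRegression().fit(X, y); reg.predict([argument_features(candidate)])
```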
Results: Argumentation Synthesis
Up to this point, we trained all selection, arrangement, and phrasing models 10 times. Combining the three models for each strategy, we finally generated one argument per strategy for the topic-stance pair left out in each experiment. Hence, we ended up with 10 computationally synthesized arguments per strategy in total.
We evaluate each of these arguments by checking whether it matches any of the 13 human-generated ground-truth arguments given per topic-stance pair. The matching is quantified in terms of n-gram overlap with n = {1, . . . , 5}.
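A minimal sketch of this overlap check, in both its sequential (order-respecting) and non-sequential variants, is given below; it reflects our reading of the metric, and the paper's exact accuracy computation may differ in detail.

```python
# Sketch of the n-gram overlap check between one synthesized argument (a
# sequence of five ADU identifiers) and the human-generated ground-truth
# arguments. Sequential n-grams respect ADU order; non-sequential ones do not.
from itertools import combinations

def ngrams_of(seq, n, sequential=True):
    if sequential:
        return {tuple(seq[i:i + n]) for i in range(len(seq) - n + 1)}
    return {tuple(sorted(c)) for c in combinations(seq, n)}

def overlaps(synth, human_args, n, sequential=True):
    synth_ngrams = ngrams_of(synth, n, sequential)
    human_ngrams = set().union(*(ngrams_of(h, n, sequential) for h in human_args))
    return len(synth_ngrams & human_ngrams) > 0

synth = ["t4", "c3", "p9", "p5", "p8"]
human = [["t1", "c1", "p1", "p12", "p4"], ["t4", "c3", "p9", "p6", "p2"]]
print(overlaps(synth, human, n=2), overlaps(synth, human, n=2, sequential=False))
```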
For comparison, we consider a baseline that randomly generates arguments for each topic-stance pair as follows:
1. Select a random thesis unit from t1 to t4.
2. Select a random con unit from c1 to c4.
3. Select three random pro units from p1 to p12.
4. Randomly arrange the selected units.

Table 5 presents the accuracy of n-gram overlaps between each of the 13 human-generated arguments per topic-stance pair and the arguments computationally synthesized by our model and by the baseline, with and without considering the ordering of ADUs.

Table 5: Accuracy of n-gram overlaps between the human-generated arguments for each strategy and the arguments computationally synthesized by our model and the baseline. In the sequential case, the ordering is considered; in the non-sequential case, it is ignored. The better result in each experiment is marked bold, if any.
Strategy ID Argument Computationally Synthesized from Five Argumentative Discourse Units
Logos t4 universities should not charge tuition fees in Germany. c3 to oblige non-academics to finance others' degrees through taxes is not just. p9 this would impede or prevent access to those who are financially weaker. p5 if a university lacks the funds, sponsors must be found. p8 there are other instruments to motivate tighter discipline while studying.
Pathos
p2 education must not be a question of money in a wealthy society such as Germany. c1 one could argue that an increase in tuition fees would allow institutions to be better equipped. p7 studying and taking higher degrees must remain a basic right for everyone. p6 longer durations of studies are costly. t2 the universities in Germany should not under any circumstances charge tuition fees.

Table 6: Comparison of two con arguments computationally synthesized with our model for the topic "Should all universities in Germany charge tuition fees?", each being a sequence of five ADUs: a logos-oriented argument (t4, c3, p9, p5, p8) and a pathos-oriented argument (p2, c1, p7, p6, t2). The thesis of each argument is marked bold.

Our models outperform the baseline for 1-grams and 2-grams in all cases. For sequential 3-grams, however, our model did not achieve any overlap with the human-generated arguments for either strategy. This may be explained by the fact that the employed selection and phrasing models are based on 2-grams only. For n > 2, the synthesis generally does not work well anymore. We believe that the small data size is a main cause behind this, although it may also point to the limitation of composing ADUs based on surface features. In the non-sequential case, though, our model performs comparably well for 3-grams, and it even manages to correctly synthesize some ADU 4-grams.
In Table 6, we exemplify the top-scored arguments for one topic-stance pair, synthesized by our approach for logos and for pathos, respectively. They indicate that our model was able to learn strategy-specific differences. 6 In particular, the logos argument starts with the thesis (t4), as argumentation guidelines suggest. It then reasons based on consequences and alternatives. Matching intuition, the pathos argument appeals more to emotion, reflected in phrases such as "wealthy society" and "under any circumstances". Particularly the thesis (t2) has a more intense tonality than t4, and putting it at the end creates additional emphasis.
Conclusion
This paper has presented a topic-independent computational approach to imitate the process of selecting, arranging, and phrasing argumentative discourse units (ADUs), that is, to synthesize arguments. We have proposed to operationalize the necessary synthesis knowledge in the form of a combined language and regression model that predicts ADU sequences. So far, we have evaluated our approach only on a small dataset that contains 260 argumentative texts following either of two rhetorical strategies. For a controlled experiment setting based on this data, we have reported preliminary results of medium effectiveness regarding the imitation of human-generated arguments.
A big challenge for the future is to move from such a controlled setting to a real-world scenario, where arguments have to be formed for a freely-chosen topic from material that is mined from the web. Still, our topic-independent approach defines a first substantial step in this direction.
Figure 4: Histogram of the cosine similarity of the average word embeddings of adjacent pairs of ADUs in logos-oriented and in pathos-oriented arguments.
5 The used model can be found here: http://nlp.stanford.edu/data/glove.42B.300d.zip
Table 1: The candidate thesis, con, and pro units for one topic-stance pair in the dataset of Wachsmuth et al. (2018).
We consider a single argument to be a sequence of ADUs where each ADU has a specific role: thesis, con, or pro.
The code for running the experiments is available here: https://github.com/webis-de/inlg19-argumentation-synthesis
A more thorough evaluation of k is left to future work.
4 We did not consider 1-grams, because arguments are inherently relational, hence requiring at least two ADUs.
Notice that the coherence of the arguments may be optimized by inserting discourse markers, such as a "but" before p7 in the pathos argument. As stated above, however, this is beyond the scope of the paper at hand.
On Rhetoric: A Theory of Civic Discourse (George A. Kennedy, Translator). Aristotle , Oxford University PressClarendon Aristotle seriesAristotle. 2007. On Rhetoric: A Theory of Civic Dis- course (George A. Kennedy, Translator). Clarendon Aristotle series. Oxford University Press.
Claim synthesis via predicate recycling. Yonatan Bilu, Noam Slonim, 10.18653/v1/P16-2085Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. the 54th Annual Meeting of the Association for Computational LinguisticsShort Papers2Association for Computational LinguisticsYonatan Bilu and Noam Slonim. 2016. Claim synthe- sis via predicate recycling. In Proceedings of the 54th Annual Meeting of the Association for Compu- tational Linguistics (Volume 2: Short Papers), pages 525-530. Association for Computational Linguis- tics.
Generating and evaluating evaluative arguments. Giuseppe Carenini, Johanna D Moore, Artificial Intelligence. 17011Giuseppe Carenini and Johanna D. Moore. 2006. Gen- erating and evaluating evaluative arguments. Artifi- cial Intelligence, 170(11):925-952.
Identifying attack and support argumentative relations using deep learning. Oana Cocarascu, Francesca Toni, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. the 2017 Conference on Empirical Methods in Natural Language ProcessingAssociation for Computational LinguisticsOana Cocarascu and Francesca Toni. 2017. Identify- ing attack and support argumentative relations us- ing deep learning. In Proceedings of the 2017 Con- ference on Empirical Methods in Natural Language Processing, pages 1374-1379. Association for Com- putational Linguistics.
Linguistic markers of psychological change surrounding. Matthias R Michael A Cohn, James W Mehl, Pennebaker, Psychological science. 1510Michael A Cohn, Matthias R Mehl, and James W Pen- nebaker. 2004. Linguistic markers of psychological change surrounding September 11, 2001. Psycho- logical science, 15(10):687-693.
Summarising the points made in online political debates. Charlie Egan, Advaith Siddharthan, Adam Wyner, 10.18653/v1/W16-2816Proceedings of the Third Workshop on Argument Mining (ArgMining2016). the Third Workshop on Argument Mining (ArgMining2016)Berlin, GermanyAssociation for Computational LinguisticsCharlie Egan, Advaith Siddharthan, and Adam Wyner. 2016. Summarising the points made in online po- litical debates. In Proceedings of the Third Work- shop on Argument Mining (ArgMining2016), pages 134-143, Berlin, Germany. Association for Compu- tational Linguistics.
Logic and conversation. H , Paul Grice, Syntax and Semantics. Peter Cole and Jerry L. MorganNew YorkAcademic Press3H. Paul Grice. 1975. Logic and conversation. In Peter Cole and Jerry L. Morgan, editors, Syntax and Se- mantics, Vol. 3, pages 41-58. Academic Press, New York.
Which argument is more convincing? Analyzing and predicting convincingness of web arguments using bidirectional LSTM. Ivan Habernal, Iryna Gurevych, 10.18653/v1/P16-1150Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics. the 54th Annual Meeting of the Association for Computational LinguisticsAssociation for Computational Linguistics1Ivan Habernal and Iryna Gurevych. 2016. Which ar- gument is more convincing? Analyzing and predict- ing convincingness of web arguments using bidirec- tional LSTM. In Proceedings of the 54th Annual Meeting of the Association for Computational Lin- guistics (Volume 1: Long Papers), pages 1589-1599. Association for Computational Linguistics.
Learning to write with cooperative discriminators. Ari Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, David Golub, Yejin Choi, 1805.06087Technical ReportAri Holtzman, Jan Buys, Maxwell Forbes, Antoine Bosselut, David Golub, and Yejin Choi. 2018. Learning to write with cooperative discriminators. Technical Report 1805.06087, arXiv.
Neural argument generation augmented with externally retrieved evidence. Xinyu Hua, Lu Wang, Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics. the 56th Annual Meeting of the Association for Computational LinguisticsAssociation for Computational LinguisticsLong Papers)Xinyu Hua and Lu Wang. 2018. Neural argument generation augmented with externally retrieved evi- dence. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Vol- ume 1: Long Papers), pages 219-230. Association for Computational Linguistics.
Pronoun use reflects standings in social hierarchies. Ewa Kacewicz, W James, Matthew Pennebaker, Moongee Davis, Arthur C Jeon, Graesser, Journal of Language and Social Psychology. 332Ewa Kacewicz, James W Pennebaker, Matthew Davis, Moongee Jeon, and Arthur C Graesser. 2014. Pronoun use reflects standings in social hierar- chies. Journal of Language and Social Psychology, 33(2):125-143.
Dave the debater: a retrieval-based and generative argumentative dialogue agent. Cam-Tu Dieu Thu Le, Kim Anh Nguyen, Nguyen, Proceedings of the 5th Workshop on Argument Mining. the 5th Workshop on Argument MiningBrussels, BelgiumAssociation for Computational LinguisticsDieu Thu Le, Cam-Tu Nguyen, and Kim Anh Nguyen. 2018. Dave the debater: a retrieval-based and gen- erative argumentative dialogue agent. In Proceed- ings of the 5th Workshop on Argument Mining, pages 121-130, Brussels, Belgium. Association for Com- putational Linguistics.
Crowdsourcing a word-emotion association lexicon. M Saif, Mohammad, D Peter, Turney, Computational Intelligence. 293Saif M Mohammad and Peter D Turney. 2013. Crowd- sourcing a word-emotion association lexicon. Com- putational Intelligence, 29(3):436-465.
Lying words: Predicting deception from linguistic styles. Personality and social psychology bulletin. James W Matthew L Newman, Diane S Pennebaker, Jane M Berry, Richards, 29Matthew L Newman, James W Pennebaker, Diane S Berry, and Jane M Richards. 2003. Lying words: Predicting deception from linguistic styles. Person- ality and social psychology bulletin, 29(5):665-675.
The effectiveness of lloyd-type methods for the k-means problem. Rafail Ostrovsky, Yuval Rabani, J Leonard, Chaitanya Schulman, Swamy, Journal of the ACM (JACM). 59628Rafail Ostrovsky, Yuval Rabani, Leonard J Schulman, and Chaitanya Swamy. 2012. The effectiveness of lloyd-type methods for the k-means problem. Jour- nal of the ACM (JACM), 59(6):28.
An annotated corpus of argumentative microtexts. Andreas Peldszus, Manfred Stede, Argumentation and Reasoned Action: 1st European Conference on Argumentation (ECA 16). College Publications. Andreas Peldszus and Manfred Stede. 2016. An anno- tated corpus of argumentative microtexts. In Argu- mentation and Reasoned Action: 1st European Con- ference on Argumentation (ECA 16). College Publi- cations.
The Development and Psychometric Properties of LIWC2015. W James, Pennebaker, L Ryan, Kayla Boyd, Kate Jordan, Blackburn, James W Pennebaker, Ryan L Boyd, Kayla Jordan, and Kate Blackburn. 2015. The Development and Psy- chometric Properties of LIWC2015.
When small words foretell academic success: The case of college admissions essays. Cindy K James W Pennebaker, Joey Chung, Frazee, M Gary, David I Lavergne, Beaver, 10.1371/journal.pone.0115844PloS one. 912115844James W Pennebaker, Cindy K Chung, Joey Frazee, Gary M Lavergne, and David I Beaver. 2014. When small words foretell academic success: The case of college admissions essays. PloS one, 9(12):e115844.
Glove: Global vectors for word representation. Jeffrey Pennington, Richard Socher, Christopher D Manning, Empirical Methods in Natural Language Processing (EMNLP). Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. Glove: Global vectors for word rep- resentation. In Empirical Methods in Natural Lan- guage Processing (EMNLP), pages 1532-1543.
A general psychoevolutionary theory of emotion. Robert Plutchik, Theories of emotion. ElsevierRobert Plutchik. 1980. A general psychoevolutionary theory of emotion. In Theories of emotion, pages 3-33. Elsevier.
A computational approach for generating Toulmin model argumentation. Paul Reisert, Naoya Inoue, Naoaki Okazaki, Kentaro Inui, 10.3115/v1/W15-0507Proceedings of the 2nd Workshop on Argumentation Mining. the 2nd Workshop on Argumentation MiningAssociation for Computational LinguisticsPaul Reisert, Naoya Inoue, Naoaki Okazaki, and Ken- taro Inui. 2015. A computational approach for gen- erating Toulmin model argumentation. In Proceed- ings of the 2nd Workshop on Argumentation Mining, pages 45-55. Association for Computational Lin- guistics.
End-to-end argument generation system in debating. Misa Sato, Kohsuke Yanai, Toshinori Miyoshi, Toshihiko Yanase, Makoto Iwayama, Qinghua Sun, Yoshiki Niwa, Proc. ACL-IJCNLP 2015 System Demonstrations. ACL-IJCNLP 2015 System DemonstrationsMisa Sato, Kohsuke Yanai, Toshinori Miyoshi, Toshi- hiko Yanase, Makoto Iwayama, Qinghua Sun, and Yoshiki Niwa. 2015. End-to-end argument genera- tion system in debating. In Proc. ACL-IJCNLP 2015 System Demonstrations.
Detecting arguing and sentiment in meetings. Swapna Somasundaran, Josef Ruppenhofer, Janyce Wiebe, Proceedings of the SIGdial Workshop on Discourse and Dialogue. the SIGdial Workshop on Discourse and Dialogue6Swapna Somasundaran, Josef Ruppenhofer, and Janyce Wiebe. 2007. Detecting arguing and sentiment in meetings. In Proceedings of the SIGdial Workshop on Discourse and Dialogue, volume 6.
Identifying argumentative discourse structures in persuasive essays. Christian Stab, Iryna Gurevych, 10.3115/v1/D14-1006Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP)Association for Computational LinguisticsChristian Stab and Iryna Gurevych. 2014. Identifying argumentative discourse structures in persuasive es- says. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 46-56. Association for Computa- tional Linguistics.
The psychological meaning of words: LIWC and computerized text analysis methods. R Yla, James W Tausczik, Pennebaker, Journal of language and social psychology. 291Yla R Tausczik and James W Pennebaker. 2010. The psychological meaning of words: LIWC and com- puterized text analysis methods. Journal of lan- guage and social psychology, 29(1):24-54.
The Uses of Argument. Stephen E Toulmin, Cambridge University PressStephen E. Toulmin. 1958. The Uses of Argument. Cambridge University Press.
Argumentation synthesis following rhetorical strategies. Henning Wachsmuth, Manfred Stede, Roxanne El Baff, Khalid Al Khatib, Maria Skeppstedt, Benno Stein, Proceedings of COLING 2018, the 27th International Conference on Computational Linguistics. The COLING 2018 Organizing Committee. COLING 2018, the 27th International Conference on Computational Linguistics. The COLING 2018 Organizing CommitteeTo appearHenning Wachsmuth, Manfred Stede, Roxanne El Baff, Khalid Al Khatib, Maria Skeppstedt, and Benno Stein. 2018. Argumentation synthesis following rhetorical strategies. In Proceedings of COLING 2018, the 27th International Conference on Compu- tational Linguistics. The COLING 2018 Organizing Committee. To appear.
Neural network-based abstract generation for opinions and arguments. Lu Wang, Wang Ling, 10.18653/v1/N16-1007Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesAssociation for Computational LinguisticsLu Wang and Wang Ling. 2016. Neural network-based abstract generation for opinions and arguments. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computa- tional Linguistics: Human Language Technologies, pages 47-57. Association for Computational Lin- guistics.
Learning sentence ordering for opinion generation of debate. Toshihiko Yanase, Toshinori Miyoshi, Kohsuke Yanai, Misa Sato, Makoto Iwayama, Yoshiki Niwa, Paul Reisert, Kentaro Inui, 10.3115/v1/W15-0512Proceedings of the 2nd Workshop on Argumentation Mining. the 2nd Workshop on Argumentation MiningAssociation for Computational LinguisticsToshihiko Yanase, Toshinori Miyoshi, Kohsuke Yanai, Misa Sato, Makoto Iwayama, Yoshiki Niwa, Paul Reisert, and Kentaro Inui. 2015. Learning sentence ordering for opinion generation of debate. In Pro- ceedings of the 2nd Workshop on Argumentation Mining, pages 94-103. Association for Computa- tional Linguistics.
Using argumentation strategies in automated argument generation. Ingrid Zukerman, Richard Mcconachy, Kevin B Korb, First International Conference on Natural Language Generation (INLG 00). Ingrid Zukerman, Richard McConachy, and Kevin B. Korb. 2000. Using argumentation strategies in au- tomated argument generation. In First International Conference on Natural Language Generation (INLG 00), pages 55-62. |
248,779,960 | Phone-ing it in: Towards Flexible, Multi-Modal Language Model Training using Phonetic Representations of Data | Multi-modal techniques offer significant untapped potential to unlock improved NLP technology for local languages. However, many advances in language model pre-training are focused on text, a fact that only increases systematic inequalities in the performance of NLP tasks across the world's languages. In this work, we propose a multi-modal approach to train language models using whatever text and/or audio data might be available in a language. Initial experiments using Swahili and Kinyarwanda data suggest the viability of the approach for downstream Named Entity Recognition (NER) tasks, with models pre-trained on phone data showing an improvement of up to 6% F1-score above models that are trained from scratch. Preprocessing and training code will be uploaded to https://github.com/sil-ai/phone-it-in. | [
208117506,
222066984,
225062352,
52124918,
6025595,
18057757,
21719838
] | Phone-ing it in: Towards Flexible, Multi-Modal Language Model Training using Phonetic Representations of Data
Long PapersCopyright Long PapersMay 22-27, 2022
Colin Leong cleong1@udayton.edu
University of Dayton
SIL International
Daniel Whitenack dan_whitenack@sil.org
University of Dayton
SIL International
Phone-ing it in: Towards Flexible, Multi-Modal Language Model Training using Phonetic Representations of Data
Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics
the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), May 22-27, 2022
Multi-modal techniques offer significant untapped potential to unlock improved NLP technology for local languages. However, many advances in language model pre-training are focused on text, a fact that only increases systematic inequalities in the performance of NLP tasks across the world's languages. In this work, we propose a multi-modal approach to train language models using whatever text and/or audio data might be available in a language. Initial experiments using Swahili and Kinyarwanda data suggest the viability of the approach for downstream Named Entity Recognition (NER) tasks, with models pre-trained on phone data showing an improvement of up to 6% F1-score above models that are trained from scratch. Preprocessing and training code will be uploaded to https://github.com/sil-ai/phone-it-in.
Introduction
Pre-trained language models are increasingly applied in ways that are agnostic to targeted downstream tasks (Brown et al., 2020). This usage has led to a proliferation of large language models trained on enormous amounts of data. For example, the recent Megatron-Turing NLG 530B model was trained on the Pile, which includes 800GB+ of text (Gao et al., 2021), and other large language models utilize large portions of the 200TB+ common crawl data. 1 These large data sets include impressive amounts of text, but all languages are not represented equally (or at all) in that text. The reality is that only a negligible fraction of the 7000+ currently spoken languages (Eberhard et al., 2021) have sufficient text corpora to train state-of-the-art language models. This data scarcity results in systematic inequalities in the performance of NLP tasks across the world's languages (Blasi et al., 2021).

Local language communities that are working to develop and preserve their languages are producing diverse sets of data beyond pure text. The Bloom Library project, 2 for example, is being used by local language communities to create and translate "shell" or "template" books into many languages (426 languages at the time this paper is being written). However, Bloom allows users to do more than just translate text. Users are also recording audio tracks and sign language videos, which has resulted in 1600+ oral translations. Other examples showing the multi-modal nature of data in local languages include: (i) the creation of ChoCo: a multimodal corpus of the Choctaw language (Brixey and Artstein, 2021); (ii) SIL International's 50+ year effort to document endangered Austronesian languages via text, audio, and video (Quakenbush, 2007); (iii) the grassroots Masakhane effort catalyzing the creation and use of diverse sets of African language data (∀ et al., 2020); and (iv) work with the Me'phaa language of western Mexico that is producing digital recordings (video and audio) along with vocabulary, grammar and texts (Marlett and Weathers, 2018).

These diverse data sources are effectively unusable by traditional text-based NLP techniques. In the light of data scarcity on these languages, they offer significant untapped potential to unlock improved NLP technology, if text data can be leveraged along with audio, image and video data. Furthermore, flexible multi-modal technology such as this will make it easier to include diverse people and communities such as those described above within the NLP technology development process - audio-based technology reducing the need for literacy, for example.

1 https://commoncrawl.org/
In this paper, we propose a multi-modal approach to train both language models and models for downstream NLP tasks using whatever text and/or audio data might be available in a language (or even in a related language). Our method utilizes recent advances in phone recognition and text/grapheme-to-phone transliteration to convert input audio and text into a common phonetic representation (the IPA phone inventory). We then pre-train character-based language models in this phone-space. Finally, we fine-tune models for downstream tasks by mapping text-based training data into the phonetic representation. Thus, in addition to flexibility in pre-training, our method provides a way to reuse labeled text data for common NLP tasks, like Named Entity Recognition or Sentiment Analysis, in the context of audio inputs.
We demonstrate our phonetic approach by training Named Entity Recognition (NER) models for Swahili [swh] 3 using various combinations of Swahili text data, Swahili audio data, Kinyarwanda [kin] text data, and Kinyarwanda audio data. These two languages both originate from the same language family, Bantu, and are spoken by millions of people in Eastern Africa, often within the same country, resulting in some overlap in loan words, etc. 4 However, they are both considered low-resource languages. Kinyarwanda in particular, though spoken by approximately 13-22 million people, 5 has very little text data available, with fewer than 3,000 articles on the Kinyarwanda-language Wikipedia; Swahili is comparatively ahead but still poorly resourced at approximately 68,000 articles, far less than many European languages, 6 though some datasets have been created, such as KINNEWS (Niyongabo et al., 2020). On the other hand, Kinyarwanda is uniquely placed as a language to leverage speech-based technologies, due to well-organized efforts 7 to collect voice data for that language. It is in fact one of the largest subsets available in the Common Voice dataset (Ardila et al., 2019), with 1,183 hours of voice clips collected and validated. Choosing these two languages allowed us to test the technique on legitimately low-resourced languages that could benefit from improved NLP technology, and which, as members of the same language family, share related sound systems.

We find that simple NER models, which just look for the presence or absence of entities, can be trained on small amounts of data (around 2000 samples) in the phonetic representation. Models trained for complicated NER tasks in the phonetic representation, which look for entities and their locations within a sequence, are improved (by up to 6+% in F1 score) through pre-training a phonetic language model using a combination of text and audio data. We see this improvement when fine-tuning either a Swahili or Kinyarwanda language model for downstream Swahili tasks, which implies that one could make use of text and audio data in related languages to boost phonetic language model performance. The utility of the method in data scarce scenarios and the importance of pre-training depend on the complexity of the downstream task.
Related Work
There have been a series of attempts to utilize phonetic representations of language to improve or extend automatic speech recognition (ASR) models. Some of these jointly model text and audio data using sequences of phonemes combined with sequences of text characters. Sundararaman et al. (2021), for example, uses a joint transformer architecture that encodes sequences of phonemes and sequences of text simultaneously. However, this joint model is utilized to learn representations that are more robust to transcription errors. The architecture still requires text inputs (from ASR transcriptions) and generates outputs in both text and phoneme representations. In contrast, our approach allows for text input, audio input, or text plus audio input to language models.
Similarly, Chaudhary et al. (2018) and Bharadwaj et al. (2016) investigate the potential of phoneme-based or phoneme-aware representations and models, showing gains in performance, language transfer, and flexibility across written scripts. These works conduct training on text-based data only, using Epitran to convert to phonemes. Baevski et al. (2021) transform unlabeled text (i.e., not aligned with corresponding audio files) into phonemes in a scheme to train speech recognition models without any labeled data. This scheme involves a generator model trained jointly with a discriminator model. The generator model converts audio, segmented into phonetic units, into predicted phonemes, and the discriminator model attempts to discriminate between these predicted phonemes and the phonemes transliterated from unlabeled text. Although both text and audio are utilized in this work, they are not input to the same model, and the primary output of the training scheme is a model that creates good phonetic speech representations from input audio.
Outside of speech recognition focused work, Shen et al. (2020) (and other researchers cited therein) attempt to "fuse" audio and text at the word level for emotion recognition. They introduce another architecture that internally represents both audio and text. However, the so-called WISE framework relies on speech recognition to generate the text corresponding to audio frames in real-time. The current work explicitly avoids reliance on speech recognition. The 2021 Multimodal Sentiment Analysis (MuSe) challenge continues this vein of research integrating audio, video, text, and physiology data in an emotion recognition task (Stappen et al., 2021). Contributions to this challenge, such as Vlasenko et al. (2021), introduce a variety of ways to "fuse" audio and text inputs. However, these contributions are squarely focused on emotion/sentiment analysis and do not propose methods for flexible, phonetic language models. Lakhotia et al. (2021) introduced functionality for "textless" NLP. They explored the possibility of creating a dialogue system from only audio inputs (i.e., without text). As part of this system, language models are directly trained on audio units without any text. This advances the state-of-the-art with regard to self-supervised speech methods, but it does not provide the flexibility in audio and/or text language modeling introduced here.
Methodology
Our approach is inspired by the fact that many languages are primarily oral, with writing systems that represent spoken sounds. We convert both text and audio into single common representation of sounds, or "phones," represented using the International Phonetic Alphabet, or IPA. Then, we perform both language model pre-training and the training of models for downstream tasks in this phonetic representation. Well-tested architectures, such as BERT-style transformer models (Vaswani et al., 2017), are thus flexibly extended to either speech or audio data.
Regarding the conversion process of text and audio data, we leverage recent advances to transliterate this data into corresponding sounds represented by IPA phonetic symbols. This transliteration is possible for speech/audio data using tools such as the Allosaurus universal phone recognizer (Li et al., 2020), which can be applied without additional training to any language, though it can benefit from fine-tuning (Siminyu et al., 2021). To convert text data to phonemes we can use tools such as the Epitran grapheme-to-phoneme converter (Mortensen et al., 2018), which is specifically designed to provide precise phonetic transliterations in low-resource scenarios. Fig. 1 shows how downstream models for certain NLP tasks, like Named Entity Recognition (NER), are trained and run in the phonetic representation. Labeled data sets for NLP tasks need to be mapped or encoded into the phonetic representation to train downstream models. However, once this mapping is accomplished, models trained in the phonetic representation can perform tasks with audio input that are typically restricted to processing text input.
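The conversion step itself can be reproduced with off-the-shelf libraries. The sketch below is illustrative rather than our released code; it assumes the epitran and allosaurus Python packages, the audio file path and example strings are placeholders, and the language codes used should be treated as indicative.

```python
# Sketch of the text/audio -> phone conversion step (placeholder paths and strings).
import epitran
from allosaurus.app import read_recognizer

epi = epitran.Epitran("swa-Latn")   # Swahili grapheme-to-phoneme mapping
recognizer = read_recognizer()       # pretrained universal phone recognizer

def text_to_phones(text: str) -> str:
    """Transliterate written Swahili into a string of IPA phones."""
    return epi.transliterate(text)

def audio_to_phones(wav_path: str) -> str:
    """Recognize IPA phones directly from a speech recording."""
    return recognizer.recognize(wav_path, "swa")  # second argument selects a language inventory

print(text_to_phones("habari ya asubuhi"))
print(audio_to_phones("clip_0001.wav"))
```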
Phonetic Language Modeling
One complication arising from direct speech-to-phone transcription is the loss of word boundaries in the transcription. This is expected, as natural speech does not put any pauses between the words in an utterance. This does, however, result in mixing text data sets containing clear word boundaries with speech data sets containing no clear word boundaries.
Borrowing from techniques used on languages that do not indicate word boundaries by the use of whitespace, we address the problem by removing all whitespace from our data sets after phone transliteration. We train character-based language models over the resulting data. Character-based models such as CharFormer (Tay et al., 2021) or ByT5 (Xue et al., 2021) have shown promise in recent years for language modeling, even if this approach is known to have some trade-offs related to shorter context windows.
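A minimal sketch of this normalisation step follows; it assumes phone strings have already been produced by Epitran (with spaces) and Allosaurus (without spaces), and the example strings are placeholders rather than actual transliterations.

```python
import re

def strip_word_boundaries(phone_string: str) -> str:
    """Remove all whitespace so text- and audio-derived phone strings share one format."""
    return re.sub(r"\s+", "", phone_string)

from_text = "ninapenda kusoma"    # Epitran-style output keeps word boundaries
from_audio = "ninapendakusoma"    # Allosaurus-style output has none

assert strip_word_boundaries(from_text) == from_audio
```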
Potential Information Losses
Figure 1: Our approach: input from either modality can be converted by phone recognition, e.g. Epitran for text, Allosaurus for speech. Then we test on several downstream tasks which we designate NER1, NER2, NER3.

The transliteration of text and audio data into phonetic representations presents several other challenges related to potential loss of information or injection of noise:
1. Loss of suprasegmental information: In some languages, meaning may be encoded through tones, or pitch changes across sounds (aka across segments, or "suprasegmental"). Particularly for tonal languages such as Mandarin Chinese [cmn], this can represent a significant informational loss, particularly for homophones with different tones, as seen in (Amrhein and Sennrich, 2020). While IPA symbols can represent these intricacies, doing so adds complexity.

2. Phone/phoneme differences: As noted in (Li et al., 2020), speech sounds which are physically different (different phones) may be perceived as the same (one phoneme) by speakers of one language, but these same sounds could perhaps be distinguished by speakers of another language. For example, the French words bouche and bûche contain phones (/u/ vs. /y/) which may sound "the same" to English speakers, but are semantically different to French speakers. In other words, in English, both phones map to the same phoneme perceptually. As the Allosaurus phone recognizer recognizes the actual phones/sounds, not their perceived phonemes, it would transcribe these two phones to different representations even for English speech. This can be mitigated to an extent by customizing the output of Allosaurus on a per-language basis; see Sec. 4.3.
3. Simple errors in phone recognition: As noted in (Siminyu et al., 2021), even the best-trained Allosaurus models, fine-tuned on language-specific data, have a non-trivial Phone Error Rate (PER).
An important question, therefore, is whether these added sources of noise/information losses are outweighed by the potential benefits in terms of flexibility. Does working in a phonetic representation cause a prohibitive amount of information loss? We constructed our experiments and data sets in order to answer this question.
Experiments
In order to evaluate the quality of learned phonetic representations, we transliterate several text and audio data sets in the Swahili [swh] language. We pre-train phonetic language models on various combinations of these data sets and evaluate downstream performance on NER tasks. See Fig. 2 for a detailed overview of these various combinations.
We refer to these combinations by downstream task (SNER for Swahili NER), pre-training language (K for Kinyarwanda, S for Swahili), and data modality (T for text, A for audio). By way of example, the SNER+ST2 model results from pre-training using two swh text datasets (ST2) and fine-tuning on the swh NER (SNER) task, whereas the SNER+SAT model results from pre-training using swh audio and text data (SAT).
Kinyarwanda [kin] data is used in our experiments as a language related to the target language (swh) with existing text and audio resources that, in some ways, surpass those available in the target language. Thus, we pre-train some models on kin data while fine-tuning for the downstream NER task using swh data.
Three different formulations of the NER task, from simpler (NER1) to more complicated/granular (NER3), are used (see Fig. 2) to help determine the applicability of our methods to both less challenging (NER1) and more challenging (NER3) tasks. The NER1 task tries to determine the presence or absence of certain kinds of entities within an input. For our task we use PER, ORG, DATE, and LOC entities. The NER2 task additionally requires models to predict the correct numbers of these entities within an input. Finally, the NER3 task requires models to determine entities at the correct locations within an input sequence of phones.
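To make the distinction concrete, the sketch below derives the three targets from a single BIO-tagged example; it illustrates the task definitions rather than our exact label construction.

```python
from collections import Counter

ENTITY_TYPES = ["PER", "ORG", "LOC", "DATE"]

def ner_targets(bio_tags):
    """Derive NER1 (presence), NER2 (counts) and NER3 (full sequence) targets."""
    entity_starts = [t.split("-", 1)[1] for t in bio_tags if t.startswith("B-")]
    counts = Counter(entity_starts)
    ner1 = {e: counts[e] > 0 for e in ENTITY_TYPES}  # presence/absence of each type
    ner2 = {e: counts[e] for e in ENTITY_TYPES}      # number of entities of each type
    ner3 = list(bio_tags)                            # entity types and exact positions
    return ner1, ner2, ner3

print(ner_targets(["B-PER", "I-PER", "O", "B-LOC", "O"]))
```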
For all of these tasks, we first convert text data to phones using Epitran and audio data to phones using Allosaurus. Then, we pre-train on various combinations of data, before fine-tuning on NER.
Data Sources
For swh pre-training data we use: (i) the "Language Modeling Data for Swahili" dataset (Shikali and Refuoe, 2019) hosted on Hugging Face (which we refer to as the "HF Swahili" data set); and (ii) the ALFFA speech dataset (Gelas et al., 2012). For ALFFA data we process both the audio files (using Allosaurus) and the original "gold" text transcriptions (using Epitran).
For Kinyarwanda pre-training data, we use the Common Voice (CV) Kinyarwanda 6.1 subset (Ardila et al., 2019). Again, we utilize both the audio files and transcriptions. Due to the large size of the CV 6.1 Kinyarwanda subset, we processed only about 80% of the audio files.
For fine-tuning the downstream NER task, we use the MasakhaNER data set (Adelani et al., 2021). As with other text-based data sets, we transform the NER sample with Epitran to map the samples into the phonetic representation.
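These resources can be pulled with the Hugging Face datasets library. The snippet below is illustrative only; the dataset identifiers and configuration names reflect our reading of the current Hub naming rather than our preprocessing scripts.

```python
from datasets import load_dataset

swh_ner = load_dataset("masakhaner", "swa")     # MasakhaNER, Swahili configuration
kin_voice = load_dataset("common_voice", "rw")  # Common Voice 6.1, Kinyarwanda subset

print(swh_ner["train"][0]["tokens"])            # word-level tokens to be phonemized
print(kin_voice["train"][0]["sentence"])        # transcription paired with an audio clip
```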
Entity to Phone Encoding
For the downstream NER tasks we map or encode the NER annotations into the phonetic representation. We thus edited the labels (PER, ORG, DATE, and LOC) to convert them from word-level labels to phone-level labels as shown in Fig. 3. Unlike (Kuru et al., 2016), we leave in the B-and I-prefixes.
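One plausible way to perform this expansion is sketched below: transliterate each word separately and repeat its tag across the resulting phones, keeping B- only on the first phone of an entity-initial word. It illustrates the idea rather than reproducing our exact preprocessing.

```python
import epitran

epi = epitran.Epitran("swa-Latn")

def words_to_phone_labels(words, word_tags):
    """Expand word-level BIO tags to one tag per phone character."""
    phones, tags = [], []
    for word, tag in zip(words, word_tags):
        phone_word = epi.transliterate(word)
        for i, ch in enumerate(phone_word):
            phones.append(ch)
            if tag == "O":
                tags.append("O")
            elif i == 0 and tag.startswith("B-"):
                tags.append(tag)                          # B- stays on the first phone only
            else:
                tags.append("I-" + tag.split("-", 1)[1])  # all remaining phones get I-
    return "".join(phones), tags
```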
Our fork of the MasakhaNER data set, which implements our phonetic representations of the labels, is published on Github. 8
Phone Inventory Considerations
As mentioned already, we use Allosaurus for phone recognition with audio inputs. In order to ensure consistency with Epitran, we took advantage of Allosaurus's inventory customization feature, giving it the phone inventories specified by the same language in Epitran. The inventory used throughout this work (for swh) is the swa-Latn inventory from Epitran. 9 When this inventory is supplied as input, Allosaurus will only output symbols from the inventory. We followed similar practice when transliterating Kinyarwanda data.
We compare the output of Epitran and Allosaurus on the ALFFA dataset. Following prior practice, we used the editdistance 10 library to calculate the Phone Error Rate (PER). Having no ground truth phone annotations, we instead take Epitran's outputs as "ground truth" for comparison. The mean PER between the outputs is 23.7%. This result is consistent with Siminyu et al. (2021), which finds PERs as high as 72.8% when testing on the Bukusu (bxk), Saamia (lsm) and East Tusom languages (an endangered subdialect of the Tungkhulic language family). However, by training the phone recognizer on even minimal amounts of data in these languages, PERs were improved significantly.
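A minimal sketch of this comparison, with placeholder phone strings standing in for the Epitran and Allosaurus outputs; the exact normalisation used in our evaluation spreadsheet may differ slightly.

```python
import editdistance

def phone_error_rate(hypothesis: str, reference: str) -> float:
    """Character-level edit distance normalised by the reference length."""
    return editdistance.eval(hypothesis, reference) / max(len(reference), 1)

reference = "nataka kwenda nairobi".replace(" ", "")   # Epitran output of the gold transcript
hypothesis = "natakagwendanairobi"                     # Allosaurus output of the matching audio
print(round(phone_error_rate(hypothesis, reference), 3))
```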
A spreadsheet with detailed results for 10k samples from ALFFA can be found online. 11
Model Architecture and Training
All models use the SHIBA implementation of CANINE (Tanner and Hagiwara, 2021). SHIBA was designed for use on the Japanese [jpn] language, which does not include spaces between its characters (similar to our phonetic representations without word boundaries). We used the default hyperparameter settings for SHIBA pre-training and fine-tuning, because we are primarily concerned with the relative impact of various combinations of pre-training data on the downstream NER tasks. We use the Hugging Face transformers library (Wolf et al., 2020) to train all models.
Because of the small size of the NER data set used during fine-tuning, we enabled Hugging Face's early stopping callback for all downstream training runs. We stopped these runs if they did not improve training loss after 20 evaluations. Nonetheless, we found after a number of trials that the models quickly overfit using this setting. We also experimented with modifying this on several trials to stop based on the evaluation loss instead, but this change did not significantly influence the evaluation results.
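The fine-tuning loop follows the standard transformers Trainer pattern. The snippet below is a schematic of this setup rather than our exact training script: model, train_set and dev_set are assumed to have been built elsewhere (a pre-trained SHIBA/CANINE-style encoder with a token-classification head and the phone-level NER splits), and the epoch count is arbitrary.

```python
from transformers import TrainingArguments, Trainer, EarlyStoppingCallback

args = TrainingArguments(
    output_dir="ner3-phone",
    evaluation_strategy="epoch",     # evaluate once per epoch
    save_strategy="epoch",           # must match eval strategy for load_best_model_at_end
    load_best_model_at_end=True,
    metric_for_best_model="loss",
    num_train_epochs=100,
)

trainer = Trainer(
    model=model,                     # phone-level encoder + token-classification head
    args=args,
    train_dataset=train_set,         # phonetic MasakhaNER training split
    eval_dataset=dev_set,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=20)],
)
trainer.train()
```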
Following the example of Adelani et al. (2021), we do not run downstream model trainings once, but multiple times. We also pre-trained each phonetic language model multiple times with different random seeds. We report averages of these multiple trials in the following.
Scripts and code for our experiments will be uploaded to Github. 12

Results and Discussion

Table 1 presents the F1 scores for our training scenarios in the downstream NER1 and NER2 tasks. The models that utilize pre-training on the kin audio and text data give the best results. However, pre-training does not appear to dramatically influence performance on these tasks. F1 scores in the range of 74-85% suggest the minimum viability of these phonetic models for simple NLP tasks.

Table 2 presents the F1 scores for our various training scenarios in the downstream NER3 task, which should be the most challenging for our phonetic models. The influence of pre-training is more noticeable for this task. Further, the models pre-trained on the kin audio and text data have the best performance. This is likely due to the fact that the kin data is both larger and of higher quality (in terms of sound quality) as compared to the ALFFA Swahili data. The benefit of this data size and quality appears to outweigh any degradation due to the pre-training occurring in a different (although related) language.

Table 2: Prediction of entity types and precise locations (NER3). Average of at least three trials per experiment, scores calculated with the seqeval library (Nakayama, 2018).

The importance (or relative impact) of pre-training phonetic language models increases with the complexity of the NER task. Fig. 4 shows the maximum percentage improvement due to pre-training for each of our NER tasks. This suggests that simple NLP tasks with a small number of output classes are much easier to port to phonetic representations, even without pre-training, while more complicated NLP tasks may require a more significant amount of text and/or audio data for pre-training. We expect this trend to carry through to tasks like sentiment analysis, which could be formulated as a simple classification task with NEG, NEU, and POS sentiment labels or as a more complicated aspect-based sentiment analysis task.
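The span-level scores reported above are computed with the seqeval library (Nakayama, 2018); its usage on phone-level BIO sequences is the same as on word-level ones. The tag sequences shown here are toy examples, not taken from our data.

```python
from seqeval.metrics import classification_report, f1_score

# One inner list of BIO tags per input example.
y_true = [["B-PER", "I-PER", "O", "B-LOC", "O"]]
y_pred = [["B-PER", "I-PER", "O", "O", "O"]]

print(f1_score(y_true, y_pred))               # span-level micro F1
print(classification_report(y_true, y_pred))  # per-entity precision/recall/F1
```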
Conclusions and Further Work
The proposed method for multi-modal training using phonetic representations of data has minimum viability for simple NER tasks. For more complicated NER tasks, pre-training phonetic language models boosts downstream model performance by up to 6% in F1 scores. This pre-training can be performed in the target language or in a related language using text and/or audio data. Thus, the method provides flexibility in the data needed to train language models, while also allowing for audio and/or text inputs to models trained on downstream NLP tasks.

Figure 4: The max percentage improvement with fine-tuning for each kind of NER task that was explored. Presence/absence of entity types (NER1), presence and count of entity types (NER2), and prediction of entity types and precise locations (NER3).
We anticipate exploring various extensions to and validations of this method in the future. Specifically, we would like to explore methods that might mitigate performance degradation due to the lack of word boundaries in our method. Subword tokenization techniques, such as Byte-Pair Encoding (BPE) (Sennrich et al., 2016; Gage, 1994), or character-based word segmentation techniques might help in detecting and exploiting repeating patterns within the phonetic representation. Furthermore, the word embedding techniques used by Chaudhary et al. (2018) and Bharadwaj et al. (2016) have been shown to work well, and it would be worth investigating how the removal of space-delimited word boundaries would affect them.
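As a rough illustration of the subword idea mentioned above, a BPE tokenizer can be trained directly on unspaced phone strings with the Hugging Face tokenizers library. This is a speculative sketch of possible future work, not something evaluated in this paper; the corpus file name and vocabulary size are placeholders.

```python
from tokenizers import Tokenizer
from tokenizers.models import BPE
from tokenizers.trainers import BpeTrainer

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
trainer = BpeTrainer(vocab_size=2000, special_tokens=["[UNK]"])

# phone_corpus.txt: one unspaced phone string per line (placeholder file name).
tokenizer.train(["phone_corpus.txt"], trainer)

# Frequent phone sequences should surface as single subword units.
print(tokenizer.encode("ninapendakusoma").tokens)
```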
We would also like to validate our methods on a variety of other data sets and tasks. We selected the MasakhaNER dataset for evaluation because we specifically wished to evaluate results on actual low-resource languages supported by both Allosaurus and Epitran. While there are still, we argue, detectable improvements in downstream results with our method, further work would benefit from additional evaluations on other data sets or tasks. In particular, the Swahili News Classification corpus (David, 2020) may provide a useful evaluation.
We did not investigate going from audio to phones, then phones to words/characters, judging that information losses and errors would likely compound in multiple stages of processing. Instead, we focused on what could be achieved with the Allosaurus "universal phone transcriber" without any language-specific finetuning. A truly universal transcriber would increase flexibility when training for truly low-resource scenarios.
Nevertheless, it has been shown by Siminyu et al. (2021) that it is possible to improve phone recognition with even small amounts (approximately 100 sentences) of annotation. It may be possible to improve phonetic language modeling results by performing this fine-tuning in the target language.
Experiments involving other languages, e.g. languages that are not related, would help to isolate the role of relatedness, lexical overlap, or related sound systems/phonology.
While we do not claim that conversion to phones provides better performance generally, we believe that our experiments show that the fundamental idea of converting either text or audio data to a common phone representation provides a viable path to a more flexible approach to certain downstream NLP tasks, worthy of further development.

Acknowledgements

The authors wish to thank Dr. Vijayan Asari, Dr. Steven Rogers, Dr. Julia Kreutzer, Dr. Graham Neubig, Dr. David Mortenson, Andre Niyongabo Rubungo, and Joshua Turner for advice, helpful discussions, assistance in debugging, and time spent in proofreading. In addition, David Adelani and the Masakhane community provided invaluable help, encouragement and assistance with the MasakhaNER dataset.
We used GNU Parallel for much of the dataset processing (Tange, 2011). In combination with Lhoest et al. (2021) from Hugging Face, GNU Parallel significantly accelerated pre-processing and phone transcription.
ClearML (AI, 2019) provided experiment track-ing, model and dataset management, and (when needed) prompt and helpful technical support. As our project involved the creation of over 20 distinct dataset variations and training many models on some of them, these management tools significantly eased the entire research process.
Ethics Statement
This research project uses open datasets and models, which are used in accordance with corresponding licenses to the best of our knowledge. For the downstream task in question (NER), we used the MasakhaNER dataset, which is constructed from newspaper data. Where this newspaper data includes mentions of individuals, the individuals are public figures. The domain of this NER data is limited to the newspaper/news domain, which should be kept in mind while considering the applicability of the methods presented.
In terms of compute, the work presented here required approximately 200 pre-training or fine-tuning jobs tracked via ClearML. Each run lasted no more than 1-2 hours for fine-tuning, but generally much longer for pre-training (on the order of a day), and only consumed one GPU resource at a time (either an A100 or P100). This computation sums up to around 5-6 GPU-weeks on the A100, about one GPU-week on the Titan RTX, and several compute-days each for the other GPUs. Additional exploratory work and debugging consumed another few GPU-days on Google Colab.
Figure 2: Training scenarios: we pre-train on various combinations of phonemized datasets, evaluating on the downstream NER task. SNER-ST denotes "Swahili Text (ST) pre-training, Swahili NER (SNER) fine-tuning", SNER-SAT denotes Swahili NER with Swahili Audio and Text (SAT) pre-training, SNER-KA uses Kinyarwanda Audio (KA), etc.
Figure 3: Adaptation of word-level NER annotations to character-level annotations.
https://bloomlibrary.org/
https://github.com/cdleong/masakhane-ner
https://bit.ly/30f8YCI
10 https://github.com/roy-ht/editdistance
11 https://bit.ly/3F0is3t
https://github.com/sil-ai/phone-it-in
D. Adelani, Jade Z. Abbott, Graham Neubig, Daniel D'souza, Julia Kreutzer, Constantine Lignos, Chester Palen-Michel, Happy Buzaaba, Shruti Rijhwani, Sebastian Ruder, Stephen Mayhew, Israel Abebe Azime, Shamsuddeen Hassan Muhammad, et al. 2021. MasakhaNER: Named Entity Recognition for African Languages. Transactions of the Association for Computational Linguistics, 9:1116-1131.
Allegro AI. 2019. ClearML - your entire MLOps stack in one open-source tool. Software available from http://github.com/allegroai/clearml.
Chantal Amrhein and Rico Sennrich. 2020. On Romanization for model transfer between scripts in neural machine translation. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 2461-2469, Online. Association for Computational Linguistics.
Rosana Ardila, Megan Branson, Kelly Davis, Michael Henretty, Michael Kohler, Josh Meyer, Reuben Morais, Lindsay Saunders, Francis M. Tyers, and Gregor Weber. 2019. Common Voice: A massively-multilingual speech corpus. CoRR, abs/1912.06670.
Alexei Baevski, Wei-Ning Hsu, Alexis Conneau, and Michael Auli. 2021. Unsupervised speech recognition. ArXiv, abs/2105.11084.
Akash Bharadwaj, David Mortensen, Chris Dyer, and Jaime Carbonell. 2016. Phonologically aware neural model for named entity recognition in low resource transfer settings. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 1462-1472, Austin, Texas. Association for Computational Linguistics.
Damián E. Blasi, Antonios Anastasopoulos, and Graham Neubig. 2021. Systematic inequalities in language technology performance across the world's languages. ArXiv, abs/2110.06733.
Jacqueline Brixey and Ron Artstein. 2021. ChoCo: a multimodal corpus of the Choctaw language. Language Resources and Evaluation, 55:241-257.
Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. In Advances in Neural Information Processing Systems, volume 33, pages 1877-1901. Curran Associates, Inc.
Aditi Chaudhary, Chunting Zhou, Lori Levin, Graham Neubig, David R. Mortensen, and Jaime Carbonell. 2018. Adapting word embeddings to new languages with morphological and phonological subword representations. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 3285-3295, Brussels, Belgium. Association for Computational Linguistics.
Davis David. 2020. Swahili: News classification dataset. The news version contains both train and test sets.
David M. Eberhard, Gary F. Simons, and Charles D. Fennig. 2021. Ethnologue: Languages of the World, twenty-fourth edition. SIL International, Dallas, Texas.
∀, Wilhelmina Nekoto, Vukosi Marivate, Tshinondiwa Matsila, Timi Fasubaa, Tajudeen Kolawole, Taiwo Fagbohungbe, Solomon Oluwole Akinola, Shamsuddee Hassan Muhammad, Salomon Kabongo, Salomey Osei, and others. 2020. Participatory research for low-resourced machine translation: A case study in African languages. Findings of EMNLP.
Philip Gage. 1994. A new algorithm for data compression. The C Users Journal, 12:23-38.
Leo Gao, Stella Rose Biderman, Sid Black, Laurence Golding, Travis Hoppe, Charles Foster, Jason Phang, Horace He, Anish Thite, Noa Nabeshima, Shawn Presser, and Connor Leahy. 2021. The Pile: An 800GB dataset of diverse text for language modeling. ArXiv, abs/2101.00027.
Hadrien Gelas, Laurent Besacier, and Francois Pellegrino. 2012. Developments of Swahili resources for an automatic speech recognition system. In SLTU - Workshop on Spoken Language Technologies for Under-Resourced Languages, Cape Town, South Africa.
Jacques Lwaboshi Kayigema and Davie Elias Mutasa. 2021. Aspects of deceptive cognate derived loanwords in Kinyarwanda. South African Journal of African Languages, 41(2):113-122.
Onur Kuru, Ozan Arkan Can, and Deniz Yuret. 2016. CharNER: Character-level named entity recognition. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 911-921, Osaka, Japan. The COLING 2016 Organizing Committee.
Kushal Lakhotia, Evgeny Kharitonov, Wei-Ning Hsu, Yossi Adi, Adam Polyak, Benjamin Bolte, Tu Nguyen, Jade Copet, Alexei Baevski, Adel Ben Mohamed, and Emmanuel Dupoux. 2021. Generative spoken language modeling from raw audio. ArXiv, abs/2102.01192.
Quentin Lhoest, Albert Villanova del Moral, Yacine Jernite, Abhishek Thakur, Patrick von Platen, Suraj Patil, Julien Chaumond, Mariama Drame, Julien Plu, Lewis Tunstall, Joe Davison, Mario Šaško, et al. 2021. Datasets: A community library for natural language processing. In Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 175-184, Online and Punta Cana, Dominican Republic. Association for Computational Linguistics.
Xinjian Li, Siddharth Dalmia, Juncheng Li, Matthew Lee, Patrick Littell, Jiali Yao, Antonios Anastasopoulos, David R. Mortensen, Graham Neubig, Alan W. Black, and Florian Metze. 2020. Universal phone recognition with a multilingual allophone system. In ICASSP 2020 - 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), pages 8249-8253. IEEE.
Stephen A. Marlett and Mark L. Weathers. 2018. The sounds of Me'phaa (Tlapanec): A new assessment. SIL-Mexico Electronic Working Papers, 25.
David R. Mortensen, Siddharth Dalmia, and Patrick Littell. 2018. Epitran: Precision G2P for many languages. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018), Paris, France. European Language Resources Association (ELRA).
Hiroki Nakayama. 2018. seqeval: A python framework for sequence labeling evaluation. Software available from https://github.com/chakki-works/seqeval.
Rubungo Andre Niyongabo, Qu Hong, Julia Kreutzer, and Li Huang. 2020. KINNEWS and KIRNEWS: Benchmarking cross-lingual text classification for Kinyarwanda and Kirundi. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5507-5521, Barcelona, Spain (Online). International Committee on Computational Linguistics.
Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in Python. Journal of Machine Learning Research, 12(Oct):2825-2830.
J. S. Quakenbush. 2007. Chapter 4. SIL International and endangered Austronesian languages. In LD&C Special Publication No. 1: Documenting and Revitalizing Austronesian Languages.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.
Guanghu Shen, Riwei Lai, Rui Chen, Yu Zhang, Kejia Zhang, Qilong Han, and Hongtao Song. 2020. WISE: Word-level interaction-based multimodal fusion for speech emotion recognition. In INTERSPEECH.
Shivachi Casper Shikali and Mokhosi Refuoe. 2019. Language modeling data for Swahili. Type: dataset.
Kathleen Siminyu, Xinjian Li, Antonios Anastasopoulos, David Mortensen, Michael R. Marlo, and Graham Neubig. 2021. Phoneme recognition through fine tuning of phonetic representations: a case study on Luhya language varieties.
Lukas Stappen, Alice Baird, Lea Schumann, and Björn W. Schuller. 2021. The multimodal sentiment analysis in car reviews (MuSe-CaR) dataset: Collection, insights and improvements. ArXiv, abs/2101.06053.
Mukuntha Narayanan Sundararaman, Ayush Kumar, and Jithendra Vepa. 2021. Phoneme-BERT: Joint language modelling of phoneme sequence and ASR transcript. ArXiv, abs/2102.00804.
O. Tange. 2011. GNU Parallel - the command-line power tool. ;login: The USENIX Magazine, 36(1):42-47.
Joshua Tanner and Masato Hagiwara. 2021. SHIBA: Japanese CANINE model. GitHub repository.
Yi Tay, Vinh Tran, Sebastian Ruder, Jai Gupta, Hyung Won Chung, Dara Bahri, Zhen Qin, Simon Baumgartner, Cong Yu, and Donald Metzler. 2021. Charformer: Fast character transformers via gradient-based subword tokenization. ArXiv, abs/2106.12672.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. CoRR, abs/1706.03762.
Bogdan Vlasenko, RaviShankar Prasad, and Mathew Magimai.-Doss. 2021. Fusion of acoustic and linguistic information using supervised autoencoder for improved emotion recognition. In Proceedings of the 2nd Multimodal Sentiment Analysis Challenge.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Remi Louf, Morgan Funtowicz, et al. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
Linting Xue, Aditya Barua, Noah Constant, Rami Al-Rfou, Sharan Narang, Mihir Kale, Adam Roberts, and Colin Raffel. 2021. ByT5: Towards a token-free future with pre-trained byte-to-byte models. CoRR, abs/2105.13626.
7,361,008 | Opportunities and Obligations to Take Turns in Collaborative Multi-Party Human-Robot Interaction | In this paper we present a data-driven model for detecting opportunities and obligations for a robot to take turns in multi-party discussions about objects. The data used for the model was collected in a public setting, where the robot head Furhat played a collaborative card sorting game together with two users. The model makes a combined detection of addressee and turn-yielding cues, using multi-modal data from voice activity, syntax, prosody, head pose, movement of cards, and dialogue context. The best result for a binary decision is achieved when several modalities are combined, giving a weighted F 1 score of 0.876 on data from a previously unseen interaction, using only automatically extractable features. | [
1436826,
1478981
] | Opportunities and Obligations to Take Turns in Collaborative Multi-Party Human-Robot Interaction
Association for Computational Linguistics. September 2015.
Martin Johansson
Department of Speech Music and Hearing
KTH Stockholm
Sweden
Gabriel Skantze skantze@kth.se
Department of Speech Music and Hearing
KTH Stockholm
Sweden
Opportunities and Obligations to Take Turns in Collaborative Multi-Party Human-Robot Interaction
Proceedings of the SIGDIAL 2015 Conference
the SIGDIAL 2015 Conference, Prague, Czech Republic, 2-4 September 2015. Association for Computational Linguistics.
In this paper we present a data-driven model for detecting opportunities and obligations for a robot to take turns in multi-party discussions about objects. The data used for the model was collected in a public setting, where the robot head Furhat played a collaborative card sorting game together with two users. The model makes a combined detection of addressee and turn-yielding cues, using multi-modal data from voice activity, syntax, prosody, head pose, movement of cards, and dialogue context. The best result for a binary decision is achieved when several modalities are combined, giving a weighted F 1 score of 0.876 on data from a previously unseen interaction, using only automatically extractable features.
Introduction
Robots of the future are envisioned to help people perform tasks, not only as mere tools, but as autonomous agents interacting and solving problems together with humans. Such interaction will be characterised by two important features that need to be taken into account when modelling the spoken interaction. Firstly, the robot should be able to solve problems together with several humans (and possibly other robots) at the same time, which means that we need to model multi-party interaction. Secondly, joint problem solving is in many cases situated, which means that the spoken discourse will involve references to, and manipulation of, objects in the shared physical space. When speaking about objects, humans typically pay attention to these objects and gaze at them. Also, placing or moving an object can be regarded as a communicative act in itself (Clark, 2005). To solve the task efficiently, interlocutors need to coordinate their attention, resulting in so-called joint attention (Clark & Marshall, 1981).
These characteristics of human-robot interaction pose many challenges for spoken dialogue systems. In this paper, we address the problem of turn-taking, which is a central problem for all spoken dialogue systems, but which is especially challenging when several interlocutors are involved. In multi-party interaction, the system does not only have to determine when a speaker yields the turn, but also whether it is yielded to the system or to someone else. This becomes even more problematic when the discussion involves objects in a shared physical space. For example, an obvious signal that humans use for yielding the turn in a face-to-face setting is to gaze at the next speaker (Vertegaal et al., 2001). However, in situated interaction, where the gaze is also used to pay attention to the objects which are under discussion, it is not obvious how this shared resource is used. While modelling all these aspects of the interaction is indeed challenging, the multi-modal nature of human-robot interaction also has the promise of offering redundant information that the system can utilize, thereby possibly increasing the robustness of the system (Vinyals et al., 2012).
The aim of this study is to develop a data-driven model that can be used by the system to decide when to take the turn and when not to. While there are many previous studies that have built such models based on human-human (Koiso et al., 1998; Morency et al., 2008) or human-machine interaction (Raux & Eskenazi, 2008; Skantze & Schlangen, 2009; Bohus & Horvitz, 2011; Meena et al., 2014), we are not aware of any previous studies that investigate multi-party human-robot discussions about objects.
The system that we build the model for, and use data from, is a collaborative game that was exhibited at the Swedish National Museum of Science and Technology on November 15-23, 2014. As can be seen in Figure 1, two visitors at a time could play a collaborative game together with the robot head Furhat (Al Moubayed et al., 2013). On the touch table between the players, a set of cards is shown. The two visitors and Furhat are given the task of sorting the cards according to some criterion. For example, the task could be to sort a set of inventions in the order they were invented, or a set of animals based on how fast they can run. This is a collaborative game, which means that the visitors have to discuss the solution together with Furhat. As we have discussed in previous work (Johansson et al., 2013), we think that the symmetry of the interaction is especially interesting from a turn-taking perspective. The setting also provides a wide range of multi-modal features that can be exploited: voice activity, syntax, prosody, head pose, movement of cards, and dialogue context 1 .
The paper is organized as follows: In Section 2 we present and discuss related work, in Section 3 we describe the system and data annotation in more detail, in Section 4 we present the performance of the different machine learning algorithms and features sets, and in Section 5 we end with conclusions and a discussion of the results.
Background
Turn-taking in dialogue systems
Numerous studies have investigated how humans synchronize turn-taking in dialogue. In a seminal study, Duncan (1972) showed how speakers use prosody, syntax and gestures to signal whether the speaker wants to hold the turn or yield it to the interlocutor. For example, flat final pitch, syntactic incompleteness and filled pauses are strong cues to turn hold. In his analysis, Duncan found that as more turn-yielding cues are presented together, the likelihood that the listener will try to take the turn increases. Later studies on human-human interaction have presented more thorough statistical analyses of turn-yielding and turn-holding cues (Koiso et al., 1998;Gravano & Hirschberg, 2011). Typically, for speech-only interaction, syntactic and semantic completeness is found to be the strongest cue, but prosody can also be informative, especially if other cues are not available. In face-to-face interaction, gaze has been found to be a strong turn-taking cue. Kendon (1967) found that the speaker gazes away from the listener during longer utterances, and then gazes at the listener as a turn-yielding cue near the end of the utterance.
Contrary to this sophisticated combination of cues for managing turn-taking, dialogue systems have traditionally only used a fixed silence threshold after which the system responds. While this model simplifies processing, it fails to account for many aspects of human-human interaction such as hesitations, turn-taking with very short gaps or brief overlaps and backchannels in the middle of utterances (Heldner & Edlund, 2010). More advanced models for turn-taking have been presented, where the system interprets syntactic and prosodic cues to make continuous decisions on when to take the turn or give feedback, resulting in both faster response time and less interruptions (Raux & Eskenazi, 2008;Skantze & Schlangen, 2009;Meena et al., 2014).
Turn-taking in multi-party interaction
Multi-party interaction differs from dyadic interaction in several ways (Traum & Rickel, 2001). First, in a dyadic interaction there are only two different roles that the speakers can have: speaker and listener. In multi-party interaction, humans may take on many different roles, such as side participant, overhearer and bystander (Mutlu et al., 2012). Second, in dyadic interaction, it is always clear who is to speak next at turn shifts. In multi-party interaction, this has to be coordinated somehow. The most obvious signal is to use gaze to select the next speaker (Vertegaal et al., 2001). Thus, for multi-party interaction between a robot and several users, gaze is a valuable feature for detecting the addressee. Gaze trackers are, however, not trivial to use in many practical settings, since they typically have a limited field of view or (if head-worn) are too invasive. In addition, they are not very robust to blinking or occlusion, and typically need calibration. Many systems therefore rely on head pose tracking, which is a simpler and more robust approach, but which cannot capture quick glances or track more precise gaze targets. However, previous studies have found head pose to be a fairly reliable indicator for gaze in multi-party interaction, given that the targets are clearly separated (Katzenmaier et al., 2004;Stiefelhagen & Zhu, 2002;Ba & Odobez, 2009). In addition to head pose, there are also studies which show that the addressee detection in human-machine interaction can be improved by also considering the speech signal, as humans typically talk differently to the machine compared to other humans (Shriberg et al., 2013). Vinyals et al. (2012) present an approach where the addressee detection is done using a large set of multi-modal features.
In situated interaction, speakers also naturally look at the objects which are under discussion. The speaker's gaze can therefore be used by the listener as a cue to the speaker's current focus of attention. This has been shown to clearly affect the extent to which humans otherwise gaze at each other to yield the turn. Argyle & Graham (1976) studied dyadic interactions involving additional targets for visual attention. Objects relevant to the task at hand were found to attract visual attention at the expense of the other subject. In a study on modelling turn-taking in three-party poster conversations, Kawahara et al. (2012) found that the participants almost always looked at the shared poster. Also, in most studies on human-robot interaction, the robot has a clear "function", and it is therefore obvious that the user is either addressing the machine or another human. However, in a previous study on multi-party human-robot discussion about objects (Johansson et al., 2013), which had a task that is very similar to the one used here, we found that the addressee of utterances is not so easy to determine. Sometimes, a question might be posed directly to the robot, which then results in an obligation to take the turn. But many times, utterances in multi-party discussions are not targeted towards a specific person, but rather to both interlocutors, resulting in an opportunity to take the turn.
The approach taken in this study is therefore to combine turn-taking and addressee detection into one decision: should the system take the turn or not? We then allow a gradual answer from a clear "no" (0) to a clear "yes" (1). If the answer is 0, it could be because a speaker is holding the turn, or because a question was clearly posed to someone else. If the answer is 1, the system is obliged to respond, most likely because one of the users has asked a question directly to the robot. But in many cases, the answer could be somewhere in between, indicating an opportunity to respond. In future work, we plan to use such a score together with a utility function in a decision-theoretic framework (Bohus & Horvitz, 2011). Thus, if the system has something urgent to say, it could do so even in a non-optimal location, whereas if what it has to say is not so important, this would require an obligation in order to respond.
Data collection and annotation
System description
As described in the introduction, we use data from a multi-party human-robot interaction game that was exhibited in a public setting. The system was implemented using the open source dialogue system framework IrisTK (Skantze & Al Moubayed, 2012) and is schematically illustrated in Figure 1. The visitors are interacting with the Furhat robot head (Al Moubayed et al., 2013), which has an animated face back-projected on a translucent mask, as well as a mechanical neck, which allows Furhat to signal his focus of attention using a combination of head pose and eye-gaze. A Kinect camera (V2) is used to track the location and rotation of the two users' heads, as well as their hands. This data, together with the position of the five cards on the touch table are sent to a Situation model, which maintains a 3D representation of the situation. Two behaviour controllers based on the Harel statechart mechanism offered by IrisTK run in parallel: The Dialog Flow and the Attention Flow. The Attention Flow keeps Furhat's attention to a specified target (a user or a card), even when the target is moving, by consulting the Situation model. The 3D position of the target is then transformed into neck and gaze movement of Furhat (again taking Furhat's position in the 3D space into account).
This, together with the 3D design of Furhat, makes it possible to maintain exclusive mutual gaze with the users, and to let them infer the target of Furhat's gaze when directed towards the cards, in order to maintain joint attention . Although the system can be configured to use the array microphone in the Kinect camera, we used close talking microphones in the museum. The main motivation for this is that the Kinect array microphone cannot separate the sound sources from the two users and we wanted to be able to run parallel speech recognizers for both users in order to capture overlapping speech (for both online and offline analysis). The speech recognition is done with two parallel cloud-based large vocabulary speech recognizers, Nuance NDEV mobile 2 , which allows Furhat to understand the users even when they are talking simultaneously.
The Dialogue Flow module orchestrates the spoken interaction, based on input from the speech recognizers, together with events from the Situation model (such as cards being moved, or someone leaving or entering the interaction). The head pose of the users is used to make a simple decision of whether Furhat is being addressed. The game is collaborative, which means that the visitors have to discuss the solution together with Furhat. However, Furhat does not have perfect knowledge about the solution. Instead, Furhat's behaviour is motivated by a randomized belief model. This means that visitors have to determine whether they should trust Furhat's belief or not, just like they have to do with each other. Thus, Furhat's role in the interaction is similar to that of the visitors, as opposed to for example a tutor role which is often given to robots in similar settings. An excerpt from an interaction is shown in Figure 2, illustrating both clear turn changes and turns with overlapping speech.
Collected Data
The dialog system was exhibited at the Swedish National Museum of Science and Technology, in November 15-23, 2014. During the 9 days the system was exhibited, we recorded data from 373 interactions with the system, with an average length of 4.5 minutes. The dataset contains mixed ages: both adults playing with each other (40%), children playing with adults (27%), and children playing with each other (33%). For the present study, 9 dialogues were selected for training and tuning the turn-taking model, and one dialogue was selected for final evaluation and for verification of the annotation scheme.
Data Annotation
In order to build a supervised machine learning model for detecting turn-taking cues, we need some kind of ground truth. There have been different approaches to deriving the ground truth in previous studies. In studies of human-human interaction, the behaviour of the other interlocutor is typically used as a ground truth (Koiso et al., 1998;Morency et al., 2008). The problem with this approach is that much turn-taking behaviour is optional, and these studies typically report a relatively poor accuracy (albeit better than baseline). Also, it is not clear to what extent they can be applied to human-machine interaction.
In this paper we follow the approach taken in Meena et al. (2014) to manually annotate appropriate places to take the turn. Although this is quite labour intensive, we think that this is the best method to obtain a consistent ground truth about potential turn-taking locations. To this end we used turn-taking decisions from one annotator (one of the authors), thus building models of one specific human's behaviour rather than an average of multiple humans' behaviour. However, as described further down, we have also evaluated the amount of agreement between this annotator with another annotator on the evaluation set.
Similarly to most previous studies on turntaking reported above, we treat the end of Inter-Pausal Units (IPUs) as potential turn-taking locations. Each channel of the recorded audio was first echo-cancelled and then automatically segmented into IPUs, using an energy-based Voice Activity Detector (VAD), with a maximum of 200ms internal silence. The logged utterances from the dialogue system were then added as a third track of IPUs. A decision point was defined after every segmented user IPU where the system had not been speaking in the last three seconds. Figure 3 presents an example of sequences of subject IPUs with the location of decision points overlaid. Note that we also include locations where the other speaker is still speaking (1 in the figure), since the other speaker might for example be talking to herself while the first speaker asks Furhat something. A set of 688 decision points from the 9 selected dialogues were annotated for turn-taking decisions. The annotator was presented with five seconds of audio and video taken from the robot's point of view. A turn-taking decision was then annotated on a continuous scale ranging from "Absolutely don't take the turn" to "Must take the turn". The scale was visually divided into four equally wide classes to guide the annotator. The first section "Don't" (35% of annotated instances) represents instances where it would be inappropriate to take the turn, for example because the other interlocutor was either the addressee or currently speaking. The next section, "If needed" (19%), covers cases where it is not really appropriate, but possible if the system has a clear reason for saying something, while "Good" (21%) covers instances where it would not be inappropriate to take the turn. The final section, "Obliged" (25%), represents instances where it would be inappropriate not to take the turn, for example when the system clearly was the sole addressee. The distribution of the decisions, illustrated in Figure 4, indicates a fairly even distribution across the x-axis, but with higher frequencies of annotations at the extremes of the scale.
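As a minimal illustration of the decision-point definition above (not the system's actual code), the extraction could be sketched as follows, assuming the IPUs are available as per-speaker lists of (start, end) times in seconds; all names here are hypothetical:

```python
def decision_points(user_ipus, system_ipus, quiet_window=3.0):
    """Candidate decision points: the end of every user IPU where the system
    has not been speaking during the last `quiet_window` seconds.

    user_ipus:   list of (speaker_id, start, end) tuples in seconds
    system_ipus: list of (start, end) tuples for the robot's own speech
    """
    points = []
    for speaker, start, end in sorted(user_ipus, key=lambda ipu: ipu[2]):
        # was any system IPU active within the last quiet_window seconds?
        system_recently_active = any(
            s_start < end and s_end > end - quiet_window
            for s_start, s_end in system_ipus
        )
        if not system_recently_active:
            points.append({"speaker": speaker, "time": end})
    return points
```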
For verification of the annotation scheme and final evaluation, we annotated a second set of 43 decision points from a tenth dialogue using both the original annotator and a second annotator. The inter-annotator agreement for the four classes was good, K w =0.772 (Cohen's Kappa, equal weights), and neither annotator classified any decision point as "Don't" when the other had classified it as "Obliged".
Results
For this analysis we will first focus on the classes "Don't" and "Obliged" to make a binary turn-taking decision in Section 4.1. We will then switch focus to the full range of annotations and predict turn-taking decisions numerically on a scale in Section 4.2. Finally, we evaluate the resulting models in Section 4.3 using annotations from a second annotator.
Binary Decision -Don't vs. Obliged
For every turn-taking decision the outcome will eventually be either to take the turn or not. For the annotated classes "Don't" and "Obliged", there is a one-to-one mapping between the class and the correct turn-taking decision. The classes "If needed" and "Good", on the other hand, encode optional behaviour; both taking the turn and not taking the turn can be considered correct at the same time, so they represent an opportunity to take the turn rather than an obligation.
In this section we therefore build a model to distinguish between "Don't" and "Obliged". For this we explore the RIPPER (JRIP), Support Vector Machine (SVM) with linear kernel function and Multilayer Perceptron (MLP) classifiers in the WEKA toolkit (Hall et al., 2009), using the default parameters. All results in this section are based on 10-fold cross-validation. For statistical analysis, we have used two-tailed tests and chosen an alpha level of 0.05.
Baseline
The majority-class baseline, always providing the classification "Don't", yields a weighted F 1 score of 0.432.
Voice Activity Features
A very basic feature to consult before taking the turn is to listen if anyone is speaking. Using only this feature the weighted F 1 score reaches 0.734, significantly better than the baseline. In addition, we also use features to add context: The amount of time each of the system and the other interlocutor has been quiet, and the length of the last turn, defined as a sequence of IPUs without IPUs from other speakers in-between, as well as length of the last IPU for the system and each of the two interlocutors. Thus, the total of VAD features is 9. The "anyone speaking" feature is the single feature yielding the highest weighted F 1 score, performing on par with the combination of all VAD features (Table 1).
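A simplified sketch (not the system's actual feature extractor, and only a subset of the nine features) of how such VAD features could be computed at a decision time t:

```python
def vad_features(t, tracks):
    """Illustrative voice-activity features at decision time t.

    tracks: dict mapping 'system', 'user1', 'user2' to lists of (start, end)
            IPU intervals for that speaker.
    """
    feats = {"anyone_speaking": any(
        s <= t <= e for ivs in tracks.values() for s, e in ivs)}
    for name, ivs in tracks.items():
        past = sorted((s, e) for s, e in ivs if e <= t)   # IPUs finished before t
        feats[f"{name}_quiet_for"] = t - past[-1][1] if past else t
        feats[f"{name}_last_ipu_len"] = past[-1][1] - past[-1][0] if past else 0.0
    return feats
```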
Prosodic Features
As prosodic features, we used final pitch and energy. A pitch tracker based on the Yin algorithm (de Cheveigné & Kawahara, 2002) was used to estimate the F0 at a rate of 100 frames per second. The F0 values were then transformed to log scale and z-normalized for each user. For each IPU, the last voiced frame was identified and then regions of 200ms and 500ms ending in this frame were selected. For these different regions, we calculated the mean, maximum, standard deviation and slope of the normalized F0 values. To calculate the slope, we took the average pitch of the second half of the region minus the average of the first half. Additionally, we calculated the maximum and standard deviation of the normalized F0 values over the full IPU. We also z-normalized the energy of the voiced frames and then calculated the maximum energy for the 200ms and 500ms regions and the full IPU. Thus, we used 13 prosodic features in total. Using MLP on the combination of all features yielded the highest weighted F1 score (0.649, see Table 1). The features based on pitch were more useful than the ones based on energy.
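A rough sketch of these final-pitch and energy features, assuming per-frame F0 and energy arrays at 100 frames per second with unvoiced frames marked as NaN; the exact normalisation details in the paper may differ from this illustration:

```python
import numpy as np

def prosodic_features(f0, energy, frame_rate=100):
    """Illustrative final pitch/energy features for one IPU.

    f0:     per-frame (z-normalised, log-scale) F0 values, NaN when unvoiced
    energy: per-frame (z-normalised) energy values, same length as f0
    """
    voiced = np.where(~np.isnan(f0))[0]
    if len(voiced) == 0:
        return {}
    last = voiced[-1] + 1                      # one past the last voiced frame
    feats = {"f0_max_ipu": np.nanmax(f0),
             "f0_std_ipu": np.nanstd(f0),
             "energy_max_ipu": np.max(energy)}
    for ms in (200, 500):
        n = int(frame_rate * ms / 1000)
        start = max(0, last - n)
        region = f0[start:last]
        region = region[~np.isnan(region)]     # keep only voiced frames
        if len(region) < 2:
            continue
        half = len(region) // 2
        feats[f"f0_mean_{ms}"] = region.mean()
        feats[f"f0_max_{ms}"] = region.max()
        feats[f"f0_std_{ms}"] = region.std()
        # slope = mean of second half minus mean of first half
        feats[f"f0_slope_{ms}"] = region[half:].mean() - region[:half].mean()
        feats[f"energy_max_{ms}"] = np.max(energy[start:last])
    return feats
```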
Syntactic Features
Syntax has been shown to be a strong turn-yielding cue in previous studies (Koiso et al., 1998;Meena et al., 2014). For example, hesitations can occur in the middle of syntactic constructions, whereas turn ends are typically syntactically complete. In previous studies, the part-of-speech (POS) of the last two words has been shown to be a useful feature. Thus, we use the POS of the last two words in an IPU as a bigram. The POS tags were automatically extracted using Stagger (Östling, 2013) based on results from the cloud-based large vocabulary speech recognizers (Nuance NDEV mobile ASR), as an automated system would need to rely on ASR. Despite a word error rate (WER) of 63.1% (SD=39.0) for the recognized IPUs, the generated POS feature performed significantly better than the baseline (Table 1). However, the increase is not very high compared to previous studies. This could be due both to the relatively high WER and to the fact that syntax in itself does not indicate the addressee of the utterance.
Head Pose Features
Unlike the other feature categories, head pose can be used to both yield the turn and to select the next speaker, and is therefore expected to be a strong feature for the current task. We represent the interlocutors' head poses in terms of angular distance between the direction of the interlocutor's head and the robot's head. The angular distance is made available as absolute angular distance as well as signed vertical and horizontal angular distance separately. The sign of the horizontal distance is adjusted to account for the mirrored position of the two interlocutors. This representation allows the system to infer if someone is looking at the system (low absolute distance), towards the table (negative vertical distance) or towards the other interlocutor (high horizontal distance).
The head pose features are generated separately for the speaker ending the IPU and the other interlocutor as well as in two composite versions representing the joint (maximum) and disjoint (minimum) distance. The features are generated both at the end of the speech in the IPU and at the time of the decision point. Thus, there are a total of 24 features available for estimating visual focus of attention. Sorting the individual features from highest weighted F 1 score to lowest, we get the following top four groups in order: Last speaker (end of speech), last speaker (decision), disjoint (decision) and then joint (end of speech). As expected, the use of head pose gives a significantly better result than the baseline (Table 1).
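The following sketch illustrates one way such head-pose features could be derived from the angular distances for a single point in time; the names and sign convention are assumptions for illustration, not the system's code:

```python
import numpy as np

def head_pose_features(speaker_angles, other_angles, mirror_other=True):
    """Illustrative head-pose features for one point in time.

    speaker_angles / other_angles: (horizontal, vertical) angular distances
    in degrees between the user's head direction and the robot's head.
    In the system these are extracted twice, at the end of the IPU and at
    the decision point, giving 2 x 12 = 24 features.
    """
    def unpack(angles, flip):
        h, v = angles
        h = -h if flip else h            # adjust for the users' mirrored seating
        return {"abs": float(np.hypot(h, v)), "horiz": h, "vert": v}

    spk = unpack(speaker_angles, False)
    oth = unpack(other_angles, mirror_other)
    feats = {}
    for key in ("abs", "horiz", "vert"):
        feats[f"speaker_{key}"] = spk[key]
        feats[f"other_{key}"] = oth[key]
        feats[f"joint_{key}"] = max(spk[key], oth[key])      # joint distance
        feats[f"disjoint_{key}"] = min(spk[key], oth[key])   # disjoint distance
    return feats
```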
Card Movement
The activity of the game table is represented in terms of card movement activity via 3 feature types. Note that we only know if a card is being moved, but not by whom. The first feature type is the duration of ongoing card movement. If no card is being moved at the moment, the value is set to 0. The second feature type is the duration of the most recently completed card movement. The final feature type is the time passed since the last movement of any card. These features are generated for two points in time; the end of the IPU relating to the decision point and the time when the decision is to be made. Thus, there are 6 card movement features in total. As can be seen in Table 1, this feature category alone performs significantly better than baseline, which is a bit surprising, given that the card movements are not necessarily linked to speech production and turn-taking.
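A small illustrative sketch of the three card-movement feature types for one point in time, assuming movements are logged as (start, end) intervals (not the actual implementation):

```python
def card_features(card_moves, t):
    """Illustrative card-movement features at time t.

    card_moves: list of (start, end) intervals during which a card was being
    moved; end is None for a movement still in progress at time t.
    """
    ongoing = [s for s, e in card_moves if s <= t and (e is None or e > t)]
    done = sorted([(s, e) for s, e in card_moves if e is not None and e <= t],
                  key=lambda iv: iv[1])
    return {
        "ongoing_move_duration": t - min(ongoing) if ongoing else 0.0,
        "last_move_duration": done[-1][1] - done[-1][0] if done else 0.0,
        "time_since_last_move": t - done[-1][1] if done else t,
    }
```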
The System's Previous Dialogue Act
To represent the dialogue context, we used the last system dialogue act as a feature. Whereas this feature gave a significant improvement in the data-driven models for dyadic turn-taking presented in Meena et al. (2014), it is the only feature category here that does not perform significantly better than the baseline (Table 1). The overall low performance of this feature could be due to the nature of multi-party dialogue, where the system doesn't necessarily have every second turn.
Combined Feature Categories
Until now we have only explored features where every category comprised one single modality. All feature categories, summarized in Table 1, have performed significantly better than the baseline with the exception of the system's last dialogue act.
In this section we explore the combinations of features from different modalities, summarized in Table 2. Combinations including head pose typically performed best. The maximum performance using automatically generated features is 0.851 using 5 feature categories: head pose, POS, card movements, prosody and the system's dialog act.
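The experiments were run in WEKA; purely as an illustration, a roughly equivalent setup in scikit-learn (whose MLP defaults differ from WEKA's, so numbers would not match) might look like this, with hypothetical names:

```python
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.metrics import f1_score

def binary_turn_taking_score(X, y, seed=0):
    """X: one row of combined features per decision point;
       y: labels, 0 = "Don't", 1 = "Obliged"."""
    clf = MLPClassifier(max_iter=1000, random_state=seed)
    folds = KFold(n_splits=10, shuffle=True, random_state=seed)
    pred = cross_val_predict(clf, X, y, cv=folds)       # 10-fold cross-validation
    return f1_score(y, pred, average="weighted")        # weighted F1 score
```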
Regression Model
While the end result of a turn-taking decision has a binary outcome, the distribution of annotations on a scale (Figure 4) suggests that there are stronger and weaker decisions, reflecting opportunities and obligations to take turns. As discussed above, such a score could be used together with a utility to take turns in a decision-theoretic framework. Thus, we also want to see whether it is possible to reproduce decisions on the scale. For this we explore the Gaussian Processes (GP) and Linear Regression (LR) classifiers in the WEKA toolkit. All results in this section are based on 10-fold cross-validation.
The individual feature categories have positive but low correlation coefficients (Table 3). Combining the feature categories with the highest correlation coefficients improves performance. Head pose in combination with VAD and card movements, using Gaussian Processes, yields the highest correlation coefficient, 0.677.
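Again only as an illustration of the evaluation setup (the paper uses WEKA's Gaussian Processes, whose defaults differ from scikit-learn's), the regression experiment could be sketched as:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.model_selection import KFold, cross_val_predict

def turn_taking_regression_score(X, y, seed=0):
    """y: annotated turn-taking decisions on the continuous 0-1 scale."""
    reg = GaussianProcessRegressor(random_state=seed)
    folds = KFold(n_splits=10, shuffle=True, random_state=seed)
    pred = cross_val_predict(reg, X, y, cv=folds)
    return np.corrcoef(y, pred)[0, 1]      # correlation coefficient
```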
Evaluation
We finally evaluated the best performing models built from the initial 9 dialogues on a separate test set of 43 decision points from a tenth dialogue, annotated both by the original annotator and a second annotator.
For the binary decision, we selected the MLP classifier with features from head pose, POS, card movements, prosody and the system's dialogue act. When evaluated on the test set annotated by the original annotator and the new annotator, the weighted F1 score was 0.876 and 0.814 for 29 and 32 instances respectively. These are promising results, given the classifier's performance of 0.851 in the training set cross-validation (Table 2) and that the test set was from a previously unseen interaction.
The regression model was evaluated using the Gaussian Processes classifier with features from head pose, VAD and card movement. The correlation coefficients for the original annotator and the new annotator were 0.5959 and 0.5647 over 43 instances each, compared to 0.677 in the training set cross-validation (Table 3). The lower values could be due to a different distribution of annotations in the test set and the relatively small data set.
Discussion and Conclusions
In this study we have developed data-driven models that can be used by a robot to decide when to take the turn and when not to in multi-party situated interaction. In the case of a simple binary decision on whether to take the turn or not, the weighted F1 score of 0.876 on data from previously unseen interactions, using several modalities in combination, is indeed promising, given a relatively small training material of 9 interactions and 688 instances. The decision process for the annotator is also simplified by not making separate decisions for turn ending and addressee detection. It should also be pointed out that we have only relied on automatically extractable features that can be derived in an online system. We have also achieved promising results for a regression model that could be used to identify both opportunities and obligations to take turns.
We have observed that combining features from different modalities yields performance improvements, and different combinations of features from diverse modalities can provide similar performance. This suggests that the multimodal redundancy indeed can be used to improve the robustness of the dialogue system. This is very relevant to the specific dialogue system in this study, as head pose data sometimes is unavailable. Two possible remedies would be to only use classifiers that are robust against missing features, or to use multiple classifiers to step in when features are unavailable.
The results support that head pose, despite sometimes being missing, is very useful for turn-taking decisions. This was expected, as head pose is the only one of our available features that can be used both to select the addressee and to act as a turn-yielding cue. The results also indicate that POS provides useful information, even when based on ASR results with high WER. Provided that higher ASR performance becomes available, we could also benefit from other more sophisticated features, such as semantic completion (Gravano & Hirschberg, 2011), to predict turn-transition relevant places.
It is also interesting to see that the card movement is an important feature, as it suggests that moving of objects can be a dialogue act in itself, as discussed in Clark (2005). This makes situated dialogue systems, where the discussion involves actions and manipulation of objects, different from traditional dialogue systems, and should be taken into account when timing responses in such systems. This also suggests that it might be necessary to not just make turn-taking decisions at the end of IPUs, but rather continuous decisions. It is not obvious, however, how this would be annotated.
With the promising results of this study, we plan to expand on this work and integrate the turn-taking models into the live dialogue system, and see to what extent this improves the actual interaction. Of particular interest for future work is the regression model that could predict turn-taking on a continuous scale, which could be integrated into a decision-theoretic framework, so that the system could also take into account to what extent it has something important to say.
Figure 1: A schematic illustration of the dialogue system setting and architecture
Figure 2: Dialogue fragment from an interaction (translated from Swedish). The shaded (green) track shows where Furhat's attention is directed. Card movements are illustrated in blue. Users' head poses are illustrated with red plots, where a high y-value means the angular distance towards Furhat is small.
Figure 3: Four numbered decision points
Figure 4: Histogram of annotated decisions on a scale from 0 (must not take turn) to 1 (must take turn)
Table 1: Weighted F1 score of the feature categories used in isolation. Results significantly better than baseline are marked with *.

Features      JRIP   SVM    MLP
VAD *         0.727  0.734  0.723
Head pose *   0.690  0.724  0.709
Cards *       0.717  0.526  0.671
Prosody *     0.648  0.574  0.649
POS *         0.602  0.630  0.634
System DA     0.506  0.506  0.500
Table 2: Weighted F1 score for different feature set combinations using RIPPER (JRIP), Support Vector Machine (SVM) and Multilayer Perceptron (MLP) classifiers

Features                  JRIP   SVM    MLP
Head pose (HP)            0.690  0.724  0.709
HP+VAD                    0.742  0.786  0.764
HP+Cards (C)              0.780  0.753  0.772
HP+Prosody (P)            0.700  0.698  0.789
HP+POS                    0.754  0.731  0.772
HP+System DA (SDA)        0.725  0.739  0.728
Best combination:
HP+POS+C+P+SDA            0.745  0.796  0.851
Table 3: Correlation coefficient for different feature set combinations using Gaussian Processes (GP) and Linear Regression (LR) classifiers

Features                  GP     LR
System DA                 0.090  0.129
Prosody                   0.146  0.135
POS                       0.193  0.188
Cards                     0.351  0.226
VAD                       0.416  0.368
Head Pose (HP)            0.447  0.376
HP+System DA              0.482  0.373
HP+Prosody                0.500  0.377
HP+POS                    0.471  0.393
HP+Cards                  0.572  0.431
HP+VAD                    0.611  0.523
Best combination:
HP+VAD+Cards              0.677  0.580
A video of the interaction can be seen at https://www.youtube.com/watch?v=5fhjuGu3d0I
http://dragonmobile.nuancemobiledeveloper.com/
Acknowledgements

This work is supported by the Swedish research council (VR) project Coordination of Attention and Turntaking in Situated Interaction (2013-1403, PI: Gabriel Skantze).

Appendix A. Gameplay Interaction - One Complete Round
Al Moubayed, S., Skantze, G., & Beskow, J. (2013). The Furhat Back-Projected Humanoid Head - Lip reading, Gaze and Multiparty Interaction. International Journal of Humanoid Robotics, 10(1).
Argyle, M., & Graham, J. A. (1976). The central Europe experiment: Looking at persons and looking at objects. Environmental Psychology and Nonverbal Behavior, 1(1), 6-16.
Ba, S. O., & Odobez, J-M. (2009). Recognizing visual focus of attention from head pose in natural meetings. IEEE Transactions on Systems, Man, and Cybernetics, Part B: Cybernetics, 39(1), 16-33.
Bohus, D., & Horvitz, E. (2011). Decisions about turns in multiparty conversation: from perception to action. In ICMI '11 Proceedings of the 13th international conference on multimodal interfaces (pp. 153-160).
Clark, H. H., & Marshall, C. R. (1981). Definite reference and mutual knowledge. In Joshi, A. K., Webber, B. L., & Sag, I. A. (Eds.), Elements of discourse understanding (pp. 10-63). Cambridge, England: Cambridge University Press.
Clark, H. H. (2005). Coordinating with each other in a material world. Discourse studies, 7(4-5), 507-525.
Duncan, S. (1972). Some Signals and Rules for Taking Speaking Turns in Conversations. Journal of Personality and Social Psychology, 23(2), 283-292.
Gravano, A., & Hirschberg, J. (2011). Turn-taking cues in task-oriented dialogue. Computer Speech & Language, 25(3), 601-634.
Hall, M., Frank, E., Holmes, G., Pfahringer, B., Reutemann, P., & Witten, I. H. (2009). The WEKA Data Mining Software: An Update. SIGKDD Explorations, 11(1).
Heldner, M., & Edlund, J. (2010). Pauses, gaps and overlaps in conversations. Journal of Phonetics, 38, 555-568.
Johansson, M., Skantze, G., & Gustafson, J. (2013). Head Pose Patterns in Multiparty Human-Robot Team-Building Interactions. In International Conference on Social Robotics - ICSR 2013. Bristol, UK.
Katzenmaier, M., Stiefelhagen, R., Schultz, T., Rogina, I., & Waibel, A. (2004). Identifying the Addressee in Human-Human-Robot Interactions based on Head Pose and Speech. In Proceedings of the International Conference on Multimodal Interfaces ICMI 2004. State College, PA, USA.
Kawahara, T., Iwatate, T., & Takanashi, K. (2012). Prediction of Turn-Taking by Combining Prosodic and Eye-Gaze Information in Poster Conversations. In Interspeech 2012.
Kendon, A. (1967). Some functions of gaze direction in social interaction. Acta Psychologica, 26, 22-63.
Koiso, H., Horiuchi, Y., Tutiya, S., Ichikawa, A., & Den, Y. (1998). An analysis of turn-taking and backchannels based on prosodic and syntactic features in Japanese Map Task dialogs. Language and Speech, 41, 295-321.
Meena, R., Skantze, G., & Gustafson, J. (2014). Data-driven Models for timing feedback responses in a Map Task dialogue system. Computer Speech and Language, 28(4), 903-922.
Morency, L. P., de Kok, I., & Gratch, J. (2008). Predicting listener backchannels: A probabilistic multimodal approach. In Proceedings of IVA (pp. 176-190). Tokyo, Japan.
Mutlu, B., Kanda, T., Forlizzi, J., Hodgins, J., & Ishiguro, H. (2012). Conversational Gaze Mechanisms for Humanlike Robots. ACM Transactions on Interactive Intelligent Systems, 1(2), 12:1-12:33.
Raux, A., & Eskenazi, M. (2008). Optimizing endpointing thresholds using dialogue features in a spoken dialogue system. In Proceedings of SIGdial 2008. Columbus, OH, USA.
Shriberg, E., Stolcke, A., & Ravuri, S. (2013). Addressee detection for dialog systems using temporal and spectral dimensions of speaking style. In Interspeech 2013 (pp. 2559-2563).
Skantze, G., & Al Moubayed, S. (2012). IrisTK: a statechart-based toolkit for multi-party face-to-face interaction. In Proceedings of ICMI. Santa Monica, CA.
Skantze, G., & Schlangen, D. (2009). Incremental dialogue processing in a micro-domain. In Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics (EACL-09). Athens, Greece.
Skantze, G., Hjalmarsson, A., & Oertel, C. (2014). Turn-taking, Feedback and Joint Attention in Situated Human-Robot Interaction. Speech Communication, 65, 50-66.
Stiefelhagen, R., & Zhu, J. (2002). Head orientation and gaze direction in meetings. In CHI '02 Extended Abstracts on Human Factors in Computing Systems (pp. 858-859).
Traum, D., & Rickel, J. (2001). Embodied Agents for Multi-party Dialogue in Immersive Virtual Worlds. In Proc. of the IJCAI Workshop on Knowledge and Reasoning in Practical Dialogue Systems (pp. 766-773). Seattle, WA, US.
Vertegaal, R., Slagter, R., van der Veer, G., & Nijholt, A. (2001). Eye gaze patterns in conversations: there is more to conversational agents than meets the eyes. In Proceedings of the ACM Conf. on Human Factors in Computing Systems.
Vinyals, O., Bohus, D., & Caruana, R. (2012). Learning speaker, addressee and overlap detection models from multimodal streams. In Proceedings of the 14th ACM international conference on Multimodal interaction (pp. 417-424).
de Cheveigné, A., & Kawahara, H. (2002). YIN, a fundamental frequency estimator for speech and music. The Journal of the Acoustical Society of America, 111(4), 1917-1930.
Östling, R. (2013). Stagger: An open-source part of speech tagger for Swedish. Northern European Journal of Language Technology (NEJLT), 3, 1-18.
11,609,990 | A coherence model based on syntactic patterns | We introduce a model of coherence which captures the intentional discourse structure in text. Our work is based on the hypothesis that syntax provides a proxy for the communicative goal of a sentence and therefore the sequence of sentences in a coherent discourse should exhibit detectable structural patterns. Results show that our method has high discriminating power for separating out coherent and incoherent news articles reaching accuracies of up to 90%. We also show that our syntactic patterns are correlated with manual annotations of intentional structure for academic conference articles and can successfully predict the coherence of abstract, introduction and related work sections of these articles. | [
1906241,
2717698,
5942882,
6204420,
3956381,
6961896,
11599080,
252796,
536951,
5461896,
92531,
1421908,
14001975,
2937659,
2570492,
8989309,
9482302,
10037247
] | A coherence model based on syntactic patterns
Association for Computational Linguistics. Copyright Association for Computational Linguistics, July 2012.
Annie Louis
University of Pennsylvania Philadelphia
19104, PA, USA
Ani Nenkova nenkova@seas.upenn.edu
University of Pennsylvania Philadelphia
19104, PA, USA
A coherence model based on syntactic patterns
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning
Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, Jeju Island, Korea. Association for Computational Linguistics, July 2012.
We introduce a model of coherence which captures the intentional discourse structure in text. Our work is based on the hypothesis that syntax provides a proxy for the communicative goal of a sentence and therefore the sequence of sentences in a coherent discourse should exhibit detectable structural patterns. Results show that our method has high discriminating power for separating out coherent and incoherent news articles reaching accuracies of up to 90%. We also show that our syntactic patterns are correlated with manual annotations of intentional structure for academic conference articles and can successfully predict the coherence of abstract, introduction and related work sections of these articles.
Introduction
Recent studies have introduced successful automatic methods to predict the structure and coherence of texts. They include entity approaches for local coherence which track the repetition and syntactic realization of entities in adjacent sentences (Barzilay and Lapata, 2008;Elsner and Charniak, 2008) and content approaches for global coherence which view texts as a sequence of topics, each characterized by a particular distribution of lexical items (Barzilay and Lee, 2004;Fung and Ngai, 2006). Other work has shown that co-occurrence of words (Lapata, 2003;Soricut and Marcu, 2006) and discourse relations (Pitler and Nenkova, 2008;Lin et al., 2011) also predict coherence.
Early theories (Grosz and Sidner, 1986) posited that there are three factors which collectively contribute to coherence: intentional structure (purpose of discourse), attentional structure (what items are discussed) and the organization of discourse segments. The highly successful entity approaches capture attentional structure and content approaches are related to topic segments but intentional structure has largely been neglected. Every discourse has a purpose: explaining a concept, narrating an event, critiquing an idea and so on. As a result each sentence in the article has a communicative goal and the sequence of goals helps the author achieve the discourse purpose. In this work, we introduce a model to capture coherence from the intentional structure dimension. Our key proposal is that syntactic patterns are a useful proxy for intentional structure.
This idea is motivated by the fact that certain sentence types such as questions and definitions have distinguishable and unique syntactic structure. For example, consider the opening sentences of two descriptive articles shown in Table 1. Sentences (1a) and (2a) are typical instances of definition sentences. Definitions are written with the concept to be defined expressed as a noun phrase followed by a copular verb (is/are). The predicate contains two parts: the first is a noun phrase reporting the concept as part of a larger class (e.g. an aqueduct is a water supply), the second component is a relative clause listing unique properties of the concept. These are examples of syntactic patterns related to the communicative goals of individual sentences. Similarly, sentences (1b) and (2b) which provide further details about the concept also have some distinguishing syntactic features such as the presence of a topicalized phrase providing the focus of the sentence. The two sets of sentences have similar sequence of communicative goals and so we can expect the syntax of adjacent sentences to also be related.

Table 1: Opening sentences of two descriptive articles.
1a) An aqueduct is a water supply or navigable channel constructed to convey water. b) In modern engineering, the term is used for any system of pipes, canals, tunnels, and other structures used for this purpose.
2a) Cytokine receptors are receptors that binds cytokines. b) In recent years, the cytokine receptors have come to demand more attention because their deficiency has now been directly linked to certain debilitating immunodeficiency states.
We aim to characterize this relationship on a broad scale using a coherence model based entirely on syntax. The model relies on two assumptions which summarize our intuitions about syntax and intentional structure:
1. Sentences with similar syntax are likely to have the same communicative goal.
2. Regularities in intentional structure will be manifested in syntactic regularities between adjacent sentences.
There is also evidence from recent work that supports these assumptions. Cheung and Penn (2010) find that a better syntactic parse of a sentence can be derived when the syntax of adjacent sentences is also taken into account. Lin et al. (2009) report that the syntactic productions in adjacent sentences are powerful features for predicting which discourse relation (cause, contrast, etc.) holds between them. Cocco et al. (2011) show that significant associations exist between certain part of speech tags and sentence types such as explanation, dialog and argumentation.
In our model, syntax is represented either as parse tree productions or a sequence of phrasal nodes augmented with part of speech tags. Our best performing method uses a Hidden Markov Model to learn the patterns in these syntactic items. Sections 3 and 5 discuss the representations and their specific implementations and relative advantages. Results show that syntax models can distinguish coherent and incoherent news articles from two domains with 75-90% accuracies over a 50% baseline. In addition, the syntax coherence scores turn out complementary to scores given by lexical and entity models.
We also study our models' predictions on academic articles, a genre where intentional structure is widely studied. Sections in these articles have well-defined purposes and we find recurring sentence types such as motivation, citations, description, and speculations. There is a large body of work (Swales, 1990;Teufel et al., 1999;Liakata et al., 2010) concerned with defining and annotating these sentence types (called zones) in conference articles. In Section 6, we describe how indeed some patterns captured by the syntax-based models are correlated with zone categories that were proposed in prior literature. We also present results on coherence prediction: our model can distinguish the introduction section of conference papers from its perturbed versions with over 70% accuracy. Further, our model is able to identify conference from workshop papers with good accuracies, given that we can expect these articles to vary in purpose.
Evidence for syntactic coherence
We first present a pilot study that confirms that adjacent sentences in discourse exhibit stable patterns of syntactic co-occurrence. This study validates our second assumption relating the syntax of adjacent sentences. Later in Section 6, we examine syntactic patterns in individual sentences (assumption 1) using a corpus of academic articles where sentences were manually annotated with communicative goals.
Prior work has reported that certain grammatical productions are repeated in adjacent sentences more often than would be expected by chance (Reitter et al., 2006;Cheung and Penn, 2010). We analyze all co-occurrence patterns rather than just repetitions.
We use the gold standard parse trees from the Penn Treebank (Marcus et al., 1994). Our unit of analysis is a pair of adjacent sentences (S1, S2) and we choose to use Section 0 of the corpus, which has 99 documents and 1727 sentence pairs. We enumerate all productions that appear in the syntactic parse of any sentence and exclude those that appear less than 25 times, resulting in a list of 197 unique productions. Then all ordered pairs (p1, p2) of productions are formed. For each pair, we compute the following: c(p1 p2) = number of sentence pairs where p1 ∈ S1 and p2 ∈ S2; c(p1 ¬p2) = number of pairs where p1 ∈ S1 and p2 ∉ S2; c(¬p1 p2) and c(¬p1 ¬p2) are computed similarly. Then we perform a chi-square test to understand if the observed count c(p1 p2) is significantly (95% confidence level) greater or lesser than the expected value if occurrences of p1 and p2 were independent. Of the 38,809 production pairs, we found that 1,168 pairs occurred in consecutive sentences significantly more often than chance and 172 appeared significantly fewer times than expected. In Table 2 we list, grouped in three simple categories, the 25 pairs of the first kind with most significant p-values.
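As an illustration of this test, a 2x2 chi-square computation for one production pair could be written as follows (a scipy-based sketch with hypothetical names, not the authors' code):

```python
from scipy.stats import chi2_contingency

def production_pair_test(c11, c10, c01, c00):
    """2x2 chi-square test for a production pair (p1, p2).

    c11 = #pairs with p1 in S1 and p2 in S2, c10 = p1 but not p2,
    c01 = p2 but not p1, c00 = neither.
    """
    table = [[c11, c10], [c01, c00]]
    chi2, p_value, dof, expected = chi2_contingency(table)
    # "preferred" pair if the observed co-occurrence exceeds the expected count
    preferred = c11 > expected[0][0]
    return p_value, preferred
```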
Some of the preferred pairs are indeed repetitions as pointed out by prior work. But they form only a small fraction (5%) of the total preferred production pairs, indicating that there are several other classes of syntactic regularities beyond priming. Some of these other sequences can be explained by the fact that these articles come from the finance domain: they involve productions containing numbers and quantities. An example for this type is shown in Table 2. Finally, there is also a class that is not repetitions or readily observed as domain-specific. The most frequent one reflects a pattern where the first sentence introduces a subject and predicate and the subject in the second sentence is pronominalized. Examples for two other patterns are given in Table 2. For the sequence (VP → VB VP | NP-SBJ → NNP NNP), a bare verb is present in S1 and is often associated with modals. In the corpus, these statements often present hypotheses or speculation. The following sentence S2 has an entity, a person or organization, giving an explanation or opinion on the statement. This pattern roughly corresponds to a SPECULATE followed by ENDORSE sequence of intentions. Similarly, in all the six adjacent sentence pairs from our corpus containing the items (NP-LOC → NNP | S-TPC-1 → NP-SBJ VP), p1 introduces a location name, and is often associated with the title of a person or organization. The next sentence has a quote from that person, where the quotation forms the topicalized clause in p2. Here the intentional structure is INTRODUCE X / STATEMENT BY X. An example of this last pattern: S1: "It has to be considered as an additional risk for the investor," said Gary P. Smaby of Smaby Group Inc., [Minneapolis]NP-LOC. S2: ["Cray Computer will be a concept stock,"]S-TPC-1 he said.
Table 2: The 25 production pairs (p1, p2) that appear in adjacent sentences significantly more often than expected, grouped in three categories, with their counts c(p1 p2).

p1 | p2 | c(p1 p2)
- Repetition -
VP → VBD SBAR | VP → VBD SBAR | 83
QP → $ CD CD | QP → $ CD CD | 18
NP → $ CD -NONE- | NP → $ CD -NONE- | 16
NP → QP -NONE- | NP → QP -NONE- | 15
NP-ADV → DT NN | NP-ADV → DT NN | 10
NP → NP NP-ADV | NP → NP NP-ADV | 7
- Quantities/Amounts -
NP → QP -NONE- | QP → $ CD CD | 16
QP → $ CD CD | NP → QP -NONE- | 15
NP → NP NP-ADV | NP → QP -NONE- | 11
NP-ADV → DT NN | NP → QP -NONE- | 11
NP → NP NP-ADV | NP-ADV → DT NN | 9
NP → $ CD -NONE- | NP-ADV → DT NN | 8
NP-ADV → DT NN | NP → $ CD -NONE- | 8
NP-ADV → DT NN | NP → NP NP-ADV | 8
NP → NP NP-ADV | QP → CD CD | 6
- Other -
S → NP-SBJ VP | NP-SBJ → PRP | 290
VP → VBD SBAR | PP-TMP → IN NP | 79
S → NP-SBJ-1 VP | VP → VBD SBAR | 43
VP → VBD NP | VP → VBD VP | 31
VP → VB VP | NP-SBJ → NNP NNP | 27
NP-SBJ-1 → NNP NNP | VP → VBD NP | 13
VP → VBZ NP | S → PP-TMP , NP-SBJ VP . | 8
NP-SBJ → JJ NNS | VP → VBP NP | 8
NP-PRD → NP PP | NP-PRD → NP SBAR | 7
NP-LOC → NNP | S-TPC-1 → NP-SBJ VP | 6
In the remainder of the paper we formalize our representation of syntax and the derived model of coherence and test its efficacy in three domains. We present two coherence models: a local model which captures the co-occurrence of structural features in adjacent sentences and a global one which learns from clusters of sentences with similar syntax.
Representing syntax
Our models rely exclusively on syntactic cues. We derive representations from constituent parses of the sentences, and terminals (words) are removed from the parse tree before any processing is done. The leaf nodes in our parse trees are part of speech tags.

Productions: In this representation we view each sentence as the set of grammatical productions, LHS → RHS, which appear in the parse of the sentence. As we already pointed out, the right-hand side (RHS) contains only non-terminal nodes. This representation is straightforward, however, some productions can be rather specific with long right hand sides. Another apparent drawback of this representation is that it contains sequence information only about nodes that belong to the same constituent.

d-sequence: In this representation we aim to preserve more sequence information about adjacent constituents in the sentence. The simplest approach would be to represent the sentence as the sequence of part of speech (POS) tags, but then we lose all the abstraction provided by higher level nodes in the tree. Instead, we introduce a more general representation, d-sequence, where the level of abstraction can be controlled using a parameter d. The parse tree is truncated to depth at most d, and the leaves of the resulting tree listed left to right form the d-sequence representation. For example, in Figure 1, the line depicts the cutoff at depth 2.
Next the representation is further augmented; all phrasal nodes in the d-sequence are annotated (concatenated) with the left-most leaf that they dominate in the full non-lexicalized parse tree. This is shown as suffixes on the S, NP and VP nodes in the figure. Such annotation conveys richer information about the structure of the subtree below nodes in the d-sequence. For example, "the chairs", "his chairs", "comfortable chairs" will be represented as NP_DT, NP_PRP$ and NP_JJ. In the resulting representations, sentences are viewed as sequences of syntactic words (w1, w2, ..., wk), k ≤ p, where p is the length of the full POS sequence and each wi is either a POS tag or a phrasal node+POS tag combination. In our example, at depth 2, the quotation sentence gets the representation (w1 = ", w2 = S_DT, w3 = ,, w4 = ", w5 = NP_NNP, w6 = VP_VBD, w7 = .) where the actual quote is omitted. Sentences that contain attributions are likely to appear more similar to each other when compared using this representation in contrast to representations derived from word or POS sequence. The depth-3 sequence is also indicated in the figure.
The main verb of a sentence is central to its structure, so the parameter d is always set to be greater than the depth of the main verb and is tuned to optimize performance for coherence prediction.
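A small sketch of how a d-sequence could be built from a non-lexicalised parse tree with NLTK; the depth convention (root at depth 0), the "_" separator and the toy tree are assumptions made for this illustration, not the paper's exact scheme:

```python
from nltk.tree import Tree

def d_sequence(tree, d):
    """Build a d-sequence from a parse tree whose leaves are POS tags
    (words removed).  Phrasal nodes cut off at depth d are annotated with
    the left-most POS tag they dominate."""
    def walk(node, depth):
        if not isinstance(node, Tree):            # a POS-tag leaf
            return [node]
        if depth == d:                            # cut here: node + left-most POS
            return [node.label() + "_" + node.leaves()[0]]
        out = []
        for child in node:
            out.extend(walk(child, depth + 1))
        return out
    return walk(tree, 0)

# toy tree (structure only, not a real treebank parse)
t = Tree.fromstring("(S (NP DT NN) (VP VBD (NP DT NN)) .)")
print(d_sequence(t, 2))   # -> ['DT', 'NN', 'VBD', 'NP_DT', '.']
```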
Implementing the model
We adapt two models of coherence to operate over the two syntactic representations.
Local co-occurrence model
This model is a direct extension from our pilot study. It allows us to test the assumption that coherent discourse is characterized by syntactic regularities in adjacent sentences. We estimate the probabilities of pairs of syntactic items from adjacent sentences in the training data and use these probabilities to compute the coherence of new texts.
The coherence of a text T containing n sentences (S_1 ... S_n) is computed as:

P(T) = \prod_{i=2}^{n} \prod_{j=1}^{|S_i|} \frac{1}{|S_{i-1}|} \sum_{k=1}^{|S_{i-1}|} p(S_i^j \mid S_{i-1}^k)

where S_x^y indicates the y-th item of S_x. Items are either productions or syntactic word unigrams depending on the representation. The conditional probabilities are computed with smoothing:

p(w_j \mid w_i) = \frac{c(w_i, w_j) + \delta_C}{c(w_i) + \delta_C \cdot |V|}

where w_i and w_j are syntactic items and c(w_i, w_j) is the number of sentences that contain the item w_i immediately followed by a sentence that contains w_j. |V| is the vocabulary size for syntactic items.
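A direct, illustrative implementation of this scoring function (hypothetical names; the counts are assumed to have been collected from training documents):

```python
import math

def local_coherence_logprob(doc, pair_counts, item_counts, vocab_size, delta=1.0):
    """log P(T) under the local co-occurrence model.

    doc:         list of sentences, each a list of syntactic items
                 (productions or d-sequence unigrams)
    pair_counts: dict (wi, wj) -> adjacency count c(wi, wj)
    item_counts: dict wi -> c(wi)
    """
    logp = 0.0
    for prev, cur in zip(doc, doc[1:]):
        for wj in cur:
            # average the smoothed conditional p(wj | wi) over the items of
            # the previous sentence, as in the formula above
            avg = sum((pair_counts.get((wi, wj), 0) + delta) /
                      (item_counts.get(wi, 0) + delta * vocab_size)
                      for wi in prev) / len(prev)
            logp += math.log(avg)
    return logp
```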
Global structure
Now we turn to a global coherence approach that implements the assumption that sentences with similar syntax have the same communicative goal as well as captures the patterns in communicative goals in the discourse. This approach uses a Hidden Markov Model (HMM) which has been a popular implementation for modeling coherence (Barzilay and Lee, 2004;Fung and Ngai, 2006;Elsner et al., 2007). The hidden states in our model depict communicative goals by encoding a probability distribution over syntactic items. This distribution gives higher weight to syntactic items that are more likely for that communicative goal. Transitions between states record the common patterns in intentional structure for the domain.
In this syntax-HMM, states h_k are created by clustering the sentences from the documents in the training set by syntactic similarity. For the productions representation of syntax, the features for clustering are the number of times a given production appeared in the parse of the sentence. For the d-sequence approach, the features are n-grams of size one to four of syntactic words from the sequence. Clustering was done by optimizing for average cosine similarity and was implemented using the CLUTO toolkit (Zhao et al., 2005). C clusters are formed and taken as the states of the model. Table 4 shows sentences from two clusters formed on the abstracts of journal articles using the productions representation. One of them, cluster (a), appears to capture descriptive sentences and cluster (b) involves mostly speculation type sentences.
The emission probabilities for each state are modeled as a (syntactic) language model derived from the sentences in it. For productions representation, this is the unigram distribution of productions from the sentences in h k . For d-sequences, the distribution is computed for bigrams of syntactic words. These language models use Lidstone smoothing with constant δ E . The probability for a sentence S l to be generated from state h k , p E (S l |h k ) is computed using these syntactic language models.
The transition probability $p_M$ from a state $h_i$ to a state $h_j$ is computed as:

$$p_M(h_j \mid h_i) = \frac{d(h_i, h_j) + \delta_M}{d(h_i) + \delta_M \cdot C}$$

where $d(h_i)$ is the number of documents whose sentences appear in $h_i$ and $d(h_i, h_j)$ is the number of documents which have a sentence in $h_i$ that is immediately followed by a sentence in $h_j$. In addition to the C states, we add one initial state $h_S$ and one final state $h_F$ to capture document beginning and end. Transitions from $h_S$ to any state $h_k$ record how likely it is for $h_k$ to be the starting state for documents of that domain. $\delta_M$ is a smoothing constant. The likelihood of a text with n sentences is given by:

$$P(T) = \sum_{h_1 \ldots h_n} \prod_{t=1}^{n} p_M(h_t \mid h_{t-1})\, p_E(S_t \mid h_t)$$
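A hedged sketch of how this likelihood can be computed with the standard forward algorithm is given below; the emission function and the transition matrices are assumed to have been estimated as described above, and all names are illustrative.

```python
# Forward-algorithm log-likelihood of a text under the syntax HMM.
import numpy as np

def text_log_likelihood(sentences, log_trans_from_start, log_trans, emit_logp):
    """log_trans_from_start: [C] log-probs from the start state;
       log_trans: [C, C] log transition matrix between clustered states;
       emit_logp(s, k): log p_E(sentence s | state k) from the per-cluster
       syntactic language model."""
    C = log_trans.shape[0]
    # initialisation: start state -> each state, emit the first sentence
    alpha = log_trans_from_start + np.array(
        [emit_logp(sentences[0], k) for k in range(C)])
    for s in sentences[1:]:
        emis = np.array([emit_logp(s, k) for k in range(C)])
        # log-sum-exp over previous states for each current state
        alpha = emis + np.array([
            np.logaddexp.reduce(alpha + log_trans[:, k]) for k in range(C)])
    return np.logaddexp.reduce(alpha)          # sum over final states
```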
All model parameters (the number of clusters C, the smoothing constants $\delta_C$, $\delta_E$, $\delta_M$, and the depth d for d-sequences) are tuned to optimize how well the model can distinguish coherent from incoherent articles. We describe these settings in Section 5.1.
Content and entity grid models
We compare the syntax model with content model and entity grid methods. These approaches are the most popular ones from prior work and also allow us to test the complementary nature of syntax with lexical statistics and entity structure. This section explains how we implemented these approaches.
Content models introduced by Barzilay and Lee (2004) and Fung and Ngai (2006) use lexically driven HMMs to capture coherence. The hidden states represent the topics of the domain and encode a probability distribution over words. Transitions between states record the probable succession of topics. We built a content model using our HMM implementation. Clusters are created using word bigram features after replacing numbers and proper names with tags NUM and PROP. The emissions are given by a bigram language model on words from the clustered sentences. Barzilay and Lee (2004) also employ an iterative clustering procedure before finalizing the states of the HMM but our method only uses one-step clustering. Despite the difference, the content model accuracies for our implementation are quite close to that from the original.
For the entity grid model, we follow the generative approach proposed by Lapata and Barzilay (2005). A text is converted into a matrix, where rows correspond to sentences, in the order in which they appear in the article. Columns are created one for each entity appearing in the text. Each cell (i,j) is filled with the grammatical role r i,j of the entity j in sentence i. We computed the entity grids using the Brown Coherence Toolkit 4 . The probability of the text (T ) is defined using the likely sequence of grammatical role transitions.
$$P(T) = \prod_{j=1}^{m} \prod_{i=1}^{n} p(r_{i,j} \mid r_{i-1,j}, \ldots, r_{i-h,j})$$
for m entities and n sentences. Parameter h controls the history size for transitions and is tuned during development. When h = 1, for example, only the grammatical role for the entity in the previous sentence is considered and earlier roles are ignored.
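As an illustration, the following sketch scores a text under this generative entity-grid formulation; the grid (here a dict from entities to their per-sentence role sequences) and the smoothed transition estimates are assumed to come from the Brown Coherence Toolkit or an equivalent preprocessing step, and the names are ours.

```python
# Generative entity-grid score with history size h.
import math

def entity_grid_logprob(grid, trans_prob, h=1):
    """grid: {entity: [role_1, ..., role_n]} with roles such as 'S', 'O', 'X', '-';
       trans_prob(role, history): smoothed p(r_i | r_{i-1}, ..., r_{i-h})."""
    score = 0.0
    for roles in grid.values():                # one role sequence per entity
        for i, role in enumerate(roles):
            history = tuple(roles[max(0, i - h):i])
            score += math.log(trans_prob(role, history))
    return score
```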
Evaluating syntactic coherence
We follow the common approach from prior work and use pairs of articles, where one has the original document order and the other is a random permutation of the sentences from the same document. Since the original article is always more coherent than a random permutation, a model can be evaluated using the accuracy with which it can identify the original article in the pair, i.e. it assigns higher probability to the original article. This setting is not ideal but has become the de facto standard for evaluation of coherence models (Barzilay and Lee, 2004;Elsner et al., 2007;Barzilay and Lapata, 2008;Karamanis et al., 2009;Lin et al., 2011;Elsner and Charniak, 2011). It is however based on a reasonable assumption as recent work (Lin et al., 2011) shows that people identify the original article as more coherent than its permutations with over 90% accuracy and assessors also have high agreement. Later, we present an experiment distinguishing conference from workshop articles as a more realistic evaluation.
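Concretely, a model's accuracy under this protocol reduces to a few lines of code; the sketch below is illustrative and assumes a scoring function that returns the model's log-probability for a document.

```python
# Pairwise evaluation: count how often the original article is scored
# higher than its permutation.
def pairwise_accuracy(test_pairs, score_fn):
    """test_pairs: list of (original_doc, permuted_doc) tuples."""
    correct = sum(score_fn(orig) > score_fn(perm) for orig, perm in test_pairs)
    return correct / len(test_pairs)
```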
We use two corpora that are widely employed for coherence prediction (Barzilay and Lee, 2004;Elsner et al., 2007;Barzilay and Lapata, 2008;Lin et al., 2011). One contains reports on airplane accidents from the National Transportation Safety Board and the other has reports about earthquakes from the Associated Press. These articles are about 10 sentences long. These corpora were chosen since within each dataset, the articles have the same intentional structure. Further, these corpora are also standard ones used in prior work on lexical, entity and discourse relation based coherence models. Later in Section 6, we show that the models perform well on the academic genre and longer articles too.
For each of the two corpora, we have 100 articles for training and 100 (accidents) and 99 (earthquakes) for testing. A maximum of 20 random permutations were generated for each test article to create the pairwise data (total of 1986 test pairs for the accident corpus and 1956 for earthquakes). 5 The baseline accuracy for random prediction is 50%. The articles were parsed using the Stanford parser (Klein and Manning, 2003).
Accuracy of the syntax model
For each model, the relevant parameters were tuned using 10-fold cross validation on the training data. In each fold, 90 documents were used for training and evaluation was done on permutations from the remaining articles. After tuning, the final model was trained on all 100 articles in the training set. Table 5 shows the results on the test set. The best number of clusters and depth for d-sequences are also indicated. Overall, the syntax models work quite well, with accuracies at least 15% or more absolute improvement over the baseline.
In the local co-occurrence approach, both productions and d-sequences provide 72% accuracy for the accidents corpus. For the earthquake corpus, the accuracies are lower and the d-sequence method works better. The best depth setting for d-sequence is rather small: depth of main verb (MVP) + 2 (or 1), and indicates that a fairly abstract level of nodes is preferred for the patterns. For comparison, we also provide results using just the POS tags in the model and this is worse than the d-sequence approach.
The global HMM model is better than the local model for each representation type giving 2 to 38% better accuracies. Here we see a different trend for the d-sequence representation, with better results for greater depths. At such depths (8 and 9) below the main verb, the nodes are mostly POS tags.
Overall both productions and d-sequence work competitively and give the best accuracies when implemented with the global approach.
Comparison with other approaches
For our implementations of the content and entity grid models, the best accuracies are 71% on the accidents corpus and 85% on the earthquakes one, similar to the syntactic models.
Ideally, we would like to combine models but we do not have separate training data. So we perform the following classification experiment which combines the predictions made by different models on the test set. Each test pair (article and permutation) forms one example and is given a class value of 0 or 1 depending on whether the first article in the pair is the original one or the second one. The example is represented as an n-dimensional vector, where n is the number of models we wish to combine. For instance, to combine content models and entity grid, two features are created: one of these records the difference in log probabilities for the two articles from the content model, the other feature indicates the difference in probabilities from the entity grid.
A logistic regression classifier is trained to predict the class using these features. The test pairs are created such that an equal number of examples have class 0 and 1, so the baseline accuracy is 50%. We run this experiment using 10-fold cross-validation on the test set after first obtaining the log probabilities from individual models. In each fold, the training is done using the pairs from 90 articles and tested on permutations from the remaining 10 articles. These accuracies are reported in Table 6. When the accuracy of a combination is better than that using any of its smaller subsets, the value is bolded.
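A sketch of this combination experiment using scikit-learn is shown below; the feature matrix and labels are placeholders, and in the actual setup the folds are grouped by article rather than drawn at random.

```python
# Combining coherence models via logistic regression over
# log-probability-difference features (placeholder data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# X[p, m] = logP_m(first article in pair p) - logP_m(second article in pair p)
# y[p]    = 1 if the first article in pair p is the original, else 0
X = np.random.randn(200, 3)        # placeholder: 200 pairs, 3 models combined
y = np.random.randint(0, 2, 200)   # placeholder labels

clf = LogisticRegression(max_iter=1000)
accuracies = cross_val_score(clf, X, y, cv=10, scoring="accuracy")
print(accuracies.mean())
```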
We find that syntax supplements both content and entity grid methods. While on the airplane corpus syntax only combines well with the entity grid, on the earthquake corpus both entity and content approaches give better accuracies when combined with syntax. However, adding all three approaches does not outperform combinations of any two of them. This result may be due to the simple approach we tested for combination. In prior work, content and entity grid methods have been combined generatively (Elsner et al., 2007) and using discriminative training with different objectives (Soricut and Marcu, 2006). Such approaches might bring out the complementary strengths of the different aspects better, and we leave such analysis for future work.
Predictions on academic articles
The distinctive intentional structure of academic articles has motivated several proposals to define and annotate the communicative purpose (argumentative zone) of each sentence (Swales, 1990;Teufel et al., 1999;Liakata et al., 2010). Supervised classifiers were also built to identify these zones (Teufel and Moens, 2000;Guo et al., 2011). So we expect that these articles form a good testbed for our models. In the remainder of the paper, we examine how unsupervised patterns discovered by our approach relate to zones and how well our models predict coherence for articles from this genre.
We employ two corpora of scientific articles.

ART Corpus: contains a set of 225 Chemistry journal articles that were manually annotated for intentional structure (Liakata and Soldatova, 2008). Each sentence was assigned one of 11 zone labels: Result, Conclusion, Objective, Method, Goal, Background, Observation, Experiment, Motivation, Model, Hypothesis. For our study, we use the annotation of the introduction and the abstract sections. We divide the data into training, development and test sets. For abstracts, we have 75, 50 and 100 articles in these sets respectively. For introductions, this split is 75, 31, 82. 6

ACL Anthology Network (AAN) Corpus: Radev et al. (2009) provides the full text of publications from ACL venues. These articles do not have any zone annotations. The AAN corpus is produced from OCR analysis and no section marking is available. To recreate these, we use the ParsCit tagger 7 (Councill et al., 2008). We use articles from years 1999 to 2011. For training, we randomly choose 70 articles from ACL and NAACL main conferences. Similarly, we obtain a development corpus of 36 ACL-NAACL articles. We create two test sets: one has 500 ACL-NAACL conference articles and another has 500 articles from ACL-sponsored workshops. We only choose articles in which all three sections (abstract, introduction and related work) could be successfully identified using ParsCit. 8 This data was sentence-segmented using MxTerminator (Reynar and Ratnaparkhi, 1997) and parsed with the Stanford Parser (Klein and Manning, 2003).
For each corpus and each section, we train all our syntactic models: the two local coherence models using the production and d-sequence representations and the HMM models with the two representations. These models are tuned on the respective development data, on the task of differentiating the original from a permuted section. For this purpose, we created a maximum of 30 permutations per article.
Comparison with ART Corpus zones
We perform this analysis using the ART corpus. The zone annotations present in this corpus allow us to directly test our first assumption in this work, that sentences with similar syntax have the same communicative goal.
For this analysis, we use the HMM-prod model for abstracts and the HMM-d-seq model for introductions. These models were chosen because they gave the best performance on the ART corpus development sets. 9 We examine the clusters created by these models on the training data and check whether there are clusters which strongly involve sentences from some particular annotated zone.
For each possible pair of cluster and zone (C i , Z j ), we compute c(C i , Z j ): the number of sentences in C i that are annotated as zone Z j . Then we use a chi-square test to identify pairs for which c(C i , Z j ) is significantly greater than expected (there is a "positive" association between C i and Z j ) and pairs where c(C i , Z j ) is significantly less than chance (C i is not associated with Z j ). A 95% confidence level was used to determine significance.
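This test can be implemented with a standard 2x2 contingency table, as in the hedged sketch below (function and variable names are ours).

```python
# Cluster-zone association test at the 95% confidence level.
from scipy.stats import chi2_contingency

def association(c_ij, cluster_size, zone_size, total):
    """c_ij: number of sentences in cluster i annotated with zone j;
       cluster_size, zone_size, total: sentence counts for the cluster,
       the zone, and the whole training set."""
    table = [[c_ij, cluster_size - c_ij],
             [zone_size - c_ij, total - cluster_size - zone_size + c_ij]]
    chi2, p, dof, expected = chi2_contingency(table)
    if p < 0.05:
        # observed count above expectation -> positive association
        return "positive" if c_ij > expected[0][0] else "negative"
    return "none"
```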
The HMM-prod model for abstracts has 9 clusters (named Clus0 to 8) and the HMM-d-seq model for introductions has 6 clusters (Clus0 to 5). The pairings of these clusters with zones which turned out to be significant are reported in Table 7. We also report for each positively associated cluster-zone pair, the following numbers: matches c(C i , Z j ), precision c(C i , Z j )/|C i | and recall c(C i , Z j )/|Z j |. The presence of significant associations validate our intuitions that syntax provides clues about communicative goals. Some clusters overwhelmingly contain the same zone, indicated by high precision, for example 64% of sentences in Clus2 from introduction sections are background sentences. Other clusters have high recall of a zone, 55% of all goal sentences from the abstracts training data is captured by Clus7. It is particularly interesting to see that Clus7 of abstracts captures both objective and goal zone sentences and for introductions, Clus4 is a mix of hypothesis and goal sentences which intuitively are closely related categories.
Original versus permuted sections
We also explore the accuracy of the syntax models for predicting coherence of articles from the test set of ART corpus and the 500 test articles from ACL-NAACL conferences. We use the same experimental setup as before and create pairs of original and permuted versions of the test articles. We created a maximum of 20 permutations for each article. The baseline accuracy is 50% as before.
For the ART corpus, we also built an oracle model of annotated zones. We train a first order Markov Chain to record the sequence of zones in the training articles. For testing, we assume that the oracle zone is provided for each sentence and use the model to predict the likelihood of the zone sequence. Results from this model represent an upper bound because an accurate hypothesis of the communicative goal is available for each sentence.
The accuracies are presented in Table 8. Overall, the HMM-d-seq model provides the best accuracies. The highest results are obtained for ACL introduction sections (74%). These results are lower than that obtained on the earthquake/accident corpus but the task here is much harder: the articles are longer and the ACL corpus also has OCR errors which affect sentence segmentation and parsing accuracies. When the oracle zones are known, the accuracies are much higher on the ART corpus indicating that the intentional structure of academic articles is very predictive of their coherence.
Conference versus workshop papers
Finally, we test whether the syntax-based model can distinguish the structure of conference from workshop articles. Conferences publish more complete and tested work and workshops often present preliminary studies. Workshops are also venues to discuss a focused and specialized topic. So the way information is conveyed in the abstracts and introductions would vary in these articles.
We perform this analysis on the ACL corpus and no permutations are used, only the original text of the 500 articles each in the conference and workshop test sets. While permutation examples provide cheap training/test data, they have a few unrealistic properties. For example, both original and permuted articles have the same length. Further some permutations could result in an outstandingly incoherent sample which is easily distinguished from the original articles. So we use the conference versus workshop task as another evaluation of our model.
We designed a classification experiment for this task which combines features from the different syntax models that were trained on the ACL conference training set. We include four features indicating the perplexity of an article under each model (Local-prod, Local-d-seq, HMM-prod, HMM-d-seq). We use perplexity rather than probability because the lengths of the articles vary widely, in contrast to the previous permutation-based tests, where both the permutation and the original article have the same length. We compute perplexity as $P(T)^{-1/n}$, where n is the number of words in the article. We also obtain the most likely state sequence for the article under the HMM-prod and HMM-d-seq models using Viterbi decoding. Then the proportion of sentences from each state of the two models is added as features.
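For clarity, the perplexity feature reduces to a one-line computation from a model's log-probability; the helper below is illustrative.

```python
# Per-article perplexity feature P(T)^(-1/n) from a model's log-probability.
import math

def perplexity(log_prob, num_words):
    return math.exp(-log_prob / num_words)   # equals P(T) ** (-1 / num_words)
```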
We also add some fine-grained features from the local model. We represent sentences in the training set as either productions or d-sequence items and compute pairs of associated items $(x_i, x_j)$ from adjacent sentences using the same chi-square test as in our pilot study. The most significant (lowest p-values) 30 pairs (each for productions and d-seq) are taken as features; a cutoff is applied such that each pair was seen at least 25 times in the training data. For a test article, we compute features that represent how often each pair is present in the article such that $x_i$ is in $S_m$ and $x_j$ is in $S_{m+1}$.
We perform this experiment for each section and there are about 90 to 140 features for the different sections. We cast the problem as a binary classification task: conference articles belong to one class and workshop articles to the other. Each class has 500 articles and so the baseline random accuracy is 50%. We perform 10-fold cross-validation using logistic regression. Our results were 59.3% accuracy for distinguishing abstracts of conference versus workshop papers, 50.3% for introductions and 55.4% for related work. For abstracts and related work, these accuracies are significantly better than the baseline (95% confidence level from a two-sided paired t-test comparing the accuracies from the 10 folds). It is possible that introductions in either case talk in general about the field and the importance of the problem addressed, and hence have similar structure.
Our accuracies are not as high as on permutation examples because the task is clearly harder. It may also be the case that the prediction is more difficult for certain papers than for others. So we also analyze our results by the confidence provided by the classifier for the predicted class. We consider only the examples predicted above a certain confidence level and compute the accuracy on these predictions. These results are shown in Table 9. The proportion of examples under each setting is also indicated.
When only examples above 0.6 confidence are examined, the classifier has a higher accuracy of 63.8% for abstracts and covers close to 70% of the examples. Similarly, when a cutoff of 0.7 is applied to the confidence for predicting related work sections, we achieve 63.3% accuracy for 53% of examples. So we can consider that 30 to 47% of the examples in the two sections respectively are harder to tell apart. Interestingly, however, even high-confidence predictions on introductions remain incorrect.
These results show that our model can successfully distinguish the structure of articles beyond just clearly incoherent permutation examples.
Conclusion
Our work is the first to develop an unsupervised model for intentional structure and to show that it has good accuracy for coherence prediction and also complements entity and lexical structure of discourse. This result raises interesting questions about how patterns captured by these different coherence metrics vary and how they can be combined usefully for predicting coherence. We plan to explore these ideas in future work. We also want to analyze genre differences to understand if the strength of these coherence dimensions varies with genre.
Figure 1: Example for d-sequence representation
Table 1: The first two sentences of two descriptive articles
Table 2: Example sentences for preferred production sequences. The span of the LHS of the corresponding production is indicated by [] braces.
Table 3: Top patterns in productions from WSJ
Table 4: Example syntactic similarity clusters. The top two descriptive productions for each cluster are also listed.

Cluster a (ADJP → JJ PP; VP → VBZ ADJP):
[1] This method VP-[is ADJP-[capable of sequence-specific detection of DNA with high accuracy]-ADJP]-VP .
[2] The same VP-[is ADJP-[true for synthetic polyamines such as polyallylamine]-ADJP]-VP .

Cluster b (VP → VB VP; VP → MD VP):
[1] Our results for the difference in reactivity VP-[can VP-[be linked to experimental observations]-VP]-VP .
[2] These phenomena taken together VP-[can VP-[be considered as the signature of the gelation process]-VP]-VP .
Table 5: Accuracies on accident and earthquake corpora

Table 6: Accuracies for combined approaches

Model                                        Accid.   Earthq.
Content + Egrid                              76.8     90.7
Content + HMM-prodn                          74.2     95.3
Content + HMM-d-seq                          82.1     90.3
Egrid + HMM-prodn                            79.6     93.9
Egrid + HMM-d-seq                            84.2     91.1
Egrid + Content + HMM-prodn                  79.5     95.0
Egrid + Content + HMM-d-seq                  84.1     92.3
Egrid + Content + HMM-prodn + HMM-d-seq      83.6     95.7
Table 7: Cluster-Zone mappings on the ART Corpus
Table 8: Accuracy in differentiating permutation from original sections on ACL and ART test sets.
Table 9: Accuracy (% examples) above each confidence level for the conference versus workshop task.
Wikipedia articles on "Aqueduct" and "Cytokine Receptors"
(p1, p2) and (p2, p1) are considered as different pairs.
Our representations are similar to features used for reranking in parsing: our first representation corresponds to "rules" features (Charniak and Johnson, 2005; Collins and Koo, 2005), and our second representation is related to "spines" (Carreras et al., 2008) and edge annotation (Huang, 2008).
http://www.cs.brown.edu/~melsner/manual.html
We downloaded the permutations from http://people.csail.mit.edu/regina/coherence/CLsubmission/
Some articles did not have labelled 'introduction' sections, resulting in fewer examples for this setup.
http://aye.comp.nus.edu.sg/parsCit/
We also exclude introduction and related work sections longer than 50 sentences and those shorter than 4 sentences since they often have inaccurate section boundaries.
Their test accuracies are reported in the next section.
Acknowledgements
This work is partially supported by a Google research grant and NSF CAREER 0953445 award.
Regina Barzilay and Mirella Lapata. 2008. Modeling local coherence: An entity-based approach. Computational Linguistics, 34(1):1-34.
Regina Barzilay and Lillian Lee. 2004. Catching the drift: Probabilistic content models, with applications to generation and summarization. In Proceedings of NAACL-HLT, pages 113-120.
Xavier Carreras, Michael Collins, and Terry Koo. 2008. TAG, dynamic programming, and the perceptron for efficient, feature-rich parsing. In Proceedings of CoNLL, pages 9-16.
Eugene Charniak and Mark Johnson. 2005. Coarse-to-fine n-best parsing and MaxEnt discriminative reranking. In Proceedings of ACL, pages 173-180.
Jackie C.K. Cheung and Gerald Penn. 2010. Utilizing extra-sentential context for parsing. In Proceedings of EMNLP, pages 23-33.
Christelle Cocco, Raphaël Pittier, François Bavaud, and Aris Xanthos. 2011. Segmentation and clustering of textual sequences: a typological approach. In Proceedings of RANLP, pages 427-433.
Michael Collins and Terry Koo. 2005. Discriminative reranking for natural language parsing. Computational Linguistics, 31:25-70.
Isaac G. Councill, C. Lee Giles, and Min-Yen Kan. 2008. ParsCit: An open-source CRF reference string parsing package. In Proceedings of LREC, pages 661-667.
Micha Elsner and Eugene Charniak. 2008. Coreference-inspired coherence modeling. In Proceedings of ACL-HLT, Short Papers, pages 41-44.
Micha Elsner and Eugene Charniak. 2011. Extending the entity grid with entity-specific features. In Proceedings of ACL-HLT, pages 125-129.
Micha Elsner, Joseph Austerweil, and Eugene Charniak. 2007. A unified local and global model for discourse coherence. In Proceedings of NAACL-HLT, pages 436-443.
Pascale Fung and Grace Ngai. 2006. One story, one flow: Hidden Markov story models for multilingual multidocument summarization. ACM Transactions on Speech and Language Processing, 3(2):1-16.
Barbara J. Grosz and Candace L. Sidner. 1986. Attention, intentions, and the structure of discourse. Computational Linguistics, 12(3):175-204.
Yufan Guo, Anna Korhonen, and Thierry Poibeau. 2011. A weakly-supervised approach to argumentative zoning of scientific documents. In Proceedings of EMNLP, pages 273-283.
Liang Huang. 2008. Forest reranking: Discriminative parsing with non-local features. In Proceedings of ACL-HLT, pages 586-594.
Nikiforos Karamanis, Chris Mellish, Massimo Poesio, and Jon Oberlander. 2009. Evaluating centering for information ordering using corpora. Computational Linguistics, 35(1):29-46.
Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of ACL, pages 423-430.
Mirella Lapata and Regina Barzilay. 2005. Automatic evaluation of text coherence: Models and representations. In Proceedings of IJCAI.
Mirella Lapata. 2003. Probabilistic text structuring: Experiments with sentence ordering. In Proceedings of ACL, pages 545-552.
Maria Liakata and Larisa Soldatova. 2008. Guidelines for the annotation of general scientific concepts. JISC Project Report.
Maria Liakata, Simone Teufel, Advaith Siddharthan, and Colin Batchelor. 2010. Corpora for the conceptualisation and zoning of scientific papers. In Proceedings of LREC.
Ziheng Lin, Min-Yen Kan, and Hwee Tou Ng. 2009. Recognizing implicit discourse relations in the Penn Discourse Treebank. In Proceedings of EMNLP, pages 343-351.
Ziheng Lin, Hwee Tou Ng, and Min-Yen Kan. 2011. Automatically evaluating text coherence using discourse relations. In Proceedings of ACL-HLT, pages 997-1006.
Mitchell P. Marcus, Beatrice Santorini, and Mary Ann Marcinkiewicz. 1994. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313-330.
Emily Pitler and Ani Nenkova. 2008. Revisiting readability: A unified framework for predicting text quality. In Proceedings of EMNLP, pages 186-195.
Dragomir R. Radev, Mark Thomas Joseph, Bryan Gibson, and Pradeep Muthukrishnan. 2009. A bibliometric and network analysis of the field of Computational Linguistics. Journal of the American Society for Information Science and Technology.
David Reitter, Johanna D. Moore, and Frank Keller. 2006. Priming of syntactic rules in task-oriented dialogue and spontaneous conversation. In Proceedings of the 28th Annual Conference of the Cognitive Science Society, pages 685-690.
Jeffrey C. Reynar and Adwait Ratnaparkhi. 1997. A maximum entropy approach to identifying sentence boundaries. In Proceedings of the Fifth Conference on Applied Natural Language Processing, pages 16-19.
Radu Soricut and Daniel Marcu. 2006. Discourse generation using utility-trained coherence models. In Proceedings of COLING-ACL, pages 803-810.
John Swales. 1990. Genre analysis: English in academic and research settings, volume 11. Cambridge University Press.
Simone Teufel and Marc Moens. 2000. What's yours and what's mine: determining intellectual attribution in scientific text. In Proceedings of EMNLP, pages 9-17.
Simone Teufel, Jean Carletta, and Marc Moens. 1999. An annotation scheme for discourse-level argumentation in research articles. In Proceedings of EACL, pages 110-117.
Ying Zhao, George Karypis, and Usama Fayyad. 2005. Hierarchical clustering algorithms for document datasets. Data Mining and Knowledge Discovery, 10:141-168.
218,974,485 | [] | Decode with Template: Content Preserving Sentiment Transfer
May 2020
Zhiyuan Wen
Jiannong Cao
Ruosong Yang
Senzhang Wang szwang@nuaa.edu.cn
College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing, China
Department of Computing, The Hong Kong Polytechnic University, Kowloon, Hong Kong, China
Decode with Template: Content Preserving Sentiment Transfer
Proceedings of the 12th Conference on Language Resources and Evaluation (LREC 2020)
Marseille, May 2020. Keywords: Text Generation, Sentiment Analysis, Bidirectionally Guided Decoding
Sentiment transfer aims to change the underlying sentiment of input sentences. The two major challenges in existing works lie in (1) effectively disentangling the original sentiment from input sentences; and (2) preserving the semantic content while transferring the sentiment. We find that identifying the sentiment-irrelevant content from input sentences to facilitate generating output sentences could address the above challenges and then propose the Decode with Template model in this paper. We first mask the explicit sentiment words in input sentences and use the rest parts as templates to eliminate the original sentiment. Then, we input the templates and the target sentiments into our bidirectionally guided variational auto-encoder (VAE) model to generate output. In our method, the template preserves most of the semantics in input sentences, and the bidirectionally guided decoding captures both forward and backward contextual information to generate output. Both two parts contribute to better content preservation. We evaluate our method on two review datasets, Amazon and Yelp, with automatic evaluation methods and human rating. The experimental results show that our method significantly outperforms state-of-the-art models, especially in content preservation.
Introduction
Sentiment transfer for text considers the semantics of a sentence in two aspects: the sentiment information, and the content independent of the sentiment information. This task aims to change the underlying sentiment of the input text while retaining the content. As the example in Table 1 shows, only the attitude toward the restaurant in the review is changed, while the sentiment-independent content about the restaurant is preserved. This task requires generating sentences that (1) conform to the target sentiments, (2) preserve the semantic content of the input sentences, and (3) are fluent and readable (Jin et al., 2019). It connects sentiment analysis and Natural Language Generation (Zhang et al., 2018a) and facilitates many NLP applications such as fighting against offensive language in social media (Santos et al., 2018), news rewriting, and building controllable dialogue systems. However, this task is difficult in practice due to the lack of parallel data (sentences with similar content but different sentiments). Several recent works (Shen et al., 2017; Hu et al., 2017; Yang et al., 2018; John et al., 2018) try to disentangle sentiments from content by assuming the texts are generated conditioned on two latent distributional representations: one with only content information, and the other with only sentiment information. Most of them focus on changing the sentiment yet fail to keep the content. The reason is that distributional disentanglement needs the latent representations of sentiment and content to be orthogonal or independent. However, it is hard to guarantee that each representation contains only the corresponding information. Therefore, reconstructing directly from these two parts might cause conflicts in both content and sentiment aspects, which leads to poor performance in content preservation.
Positive to negative sentiment transfer Input:
I love this place , the service is always great! Output: I hate this place, the service is bad. Table 1: An example of sentiment transfer. The input sentence is a review of restaurant service with positive sentiment. The sentiment transfer model changes the input to a negative review but preserves the sentiment-free content.
Instead of modifying the sentiment only in latent distributional space, we consider this task as a combination of instance-level modification with semantic generation and propose our method: Decode with Template. In our model, we adapt the variational autoencoder (VAE) (Kingma and Welling, 2013) by using bidirectional Gated Recurrent Units RNNs (GRU) (Cho et al., 2014) for both the encoder and the decoder. Inspired by Zhang et al., 2018b;Wu et al., 2019), we first generate the templates by masking all the sentiment words to eliminate the original sentiment information in input sentences. Then, the templates are fed into the encoder to get the semantic content representations. Next, we modify the templates by replacing the masked sentiment words with the target sentiment representations we got from the sentiment memory we build. Finally, we input the content representations together with the modified templates into our bidirectional decoder to generate output sentences. To improve the model ability to generate sentences rendering target sentiments, we also use a sentiment classifier to perform an adversarial training. In our method, the templates can well preserve the semantic content of input sentences. Besides, the latent representations from the encoder robustly capture the semantic infor-mation. By using the bidirectional GRU as the decoder and the modified templates as its partial input, both forward and backward contextual information can be captured for better preserving the content of the input sentences. Besides, the bidirectionally guided decoding also prevents the error accumulation in the unidirectional autoregressive RNN language models based decoder, which is commonly used in many previous works (Hu et al., 2017;Shen et al., 2017;Fu et al., 2018;Li et al., 2018). Moreover, we use the target sentiment representations to modify the templates, and thus the sentiment information and contextual information can be integrated for generation. Our method combines instance-level modification and semantic generation and thus achieves better content preservation and naturalness for the output sentences.
To demonstrate the effectiveness of our approach, we conduct experiments on two review datasets, Amazon and Yelp. We evaluate the performance in both automatic metrics and human evaluation from three aspects: sentiment transfer intensity, content preservation, and naturalness (Mir et al., 2019). Results show that our method significantly outperforms state-of-the-art models. Besides, we also conduct ablation study to show how each component in our method affects the overall performance. We summarize our contributions as follows:
• We propose the Decode with Template model that combines instance-level modification with semantic generation for the sentiment transfer task without parallel data.
• We innovatively use the modified templates to enable a bidirectionally guided decoder, which captures both forward and backward context in decoding and prevents the error accumulation of unidirectional autoregressive RNN decoders. The bidirectionally guided decoding could also be easily adapted to many other modification and generation tasks.
• The proposed method significantly outperforms state-of-the-art approaches on public sentiment transfer datasets, especially in content preservation.
Related Works
The problem of sentiment transfer is a special case of text style transfer, which requires transferring the original text styles of sentences into desired ones. Since there is little text data with explicit style labels, most previous research regards sentiment as a kind of text style and focuses on sentiment transfer due to the abundant data and research in sentiment analysis. Earlier works modify text styles through semantic disentanglement. (Hu et al., 2017) first proposed a neural generative model that combines VAEs and style discriminators for the forceful imposition of style and semantic structures. (Shen et al., 2017) assume that two corpora of sentences share the same distribution of content albeit rendered in different styles. They hence separate styles from semantic content by mapping the input sentences to their pure content representations, and then pass the representations to specified style-dependent decoders for rendering. (Fu et al., 2018) extended the above ideas by using an adversarial network to discourage encoding style information into the content representations. Though it is intuitive to separate style and content in semantic space, their works did not perform well in content preservation and rendering target styles due to the impure disentanglement.
To better preserve the content, (Prabhumoye et al., 2018; Jin et al., 2019) use back-translation techniques borrowed from neural machine translation and obtain reasonable performance, yet turn out to be complicated in practice. Other work proposed the TemplateBased method, which only modifies the sentiment words in input sentences; it is easy to operate yet leads to poor naturalness. To endow the output sentences with the target styles, some works also propose to concatenate the sentiment embeddings with semantic representations for decoding. Differently, (Lample et al., 2019) use multiple attribute embeddings as the start-of-sequence ( SOS ) input to the decoder in generation. Both of the above methods use style attributes as partial decoder input. Besides, (Prabhumoye et al., 2018) use different style discriminators to guide the generation adversarially.
For output generation, most of the previous works use unidirectional RNN-like decoders due to their excellent performance in text generation. However, the error accumulation caused by only using historical contextual information to generate the next words autoregressively is non-negligible. Compared to the works above, the main innovation of our method is that we refrain from separation in semantic space by combining semantic generation with instance-level modification, and thus achieve better content preservation.
Decode with Template
In this section, we will first formalize our problem definition, then present an overview of the proposed Decode with Template model. Then we will introduce how to generate the templates, and how to modify the templates with desired sentiments. The adapted bidirectionally guided VAE model will be elaborated next. Finally, we will introduce the adversarial training with sentiment classifier and the overall loss.
Problem Statement
The studied problem is formally defined as follows. Given a set of sentences with sentiment labels $X = \{(x_1, y_1), ..., (x_n, y_n)\}$, where $x_i$ is a sentence whose sentiment label (either "positive" or "negative") is indicated by $y_i$, the goal is to build a model that can generate a readable sentence $\hat{x}_i$ rendering the sentiment $\hat{y}_i$ opposite to $y_i$, and at the same time preserving the content of $x_i$.
Model Overview
As shown in Figure 1, the Decode with Template model contains four parts. For each input sentence, we first mask the sentiment words to generate a template without sentiment information. Then we input the template into the encoder (the left part of Figure 1) to learn the content representation. Next, we modify the template by replacing the masked words with the target sentiment representations (the right lower part of Figure 1). Finally, we feed both the learned semantic content representation and the modified template into the decoder to generate the output sentences (the right upper part of Figure 1). During model training, a sentiment classifier is also used as the discriminator to enhance the model ability to generate sentences that render the target sentiment in an adversarial learning way. Our model can be formalized as below:
$$\begin{aligned}
temp_i &= F_{mask}(x_i, y_i) \\
z_i &= E(temp_i) \\
\widetilde{temp}_i &= F_{modify}(temp_i, \hat{y}_i) \\
\hat{x}_i &= G(\widetilde{temp}_i, z_i)
\end{aligned} \quad (1)$$
where $F_{mask}$ is a function that utilizes an external sentiment lexicon to replace the sentiment words in each input sentence $x_i$ with a token " neutral ". $temp_i$ is the template that contains only the semantic content words of $x_i$. $E$ is the encoder that takes $temp_i$ as input and generates the content representation $z_i$. $F_{modify}$ is a function that modifies the sentiment-independent template $temp_i$ into $\widetilde{temp}_i$ using the target sentiment representations of $\hat{y}_i$. $G$ is the bidirectional decoder, and $\hat{x}_i$ is the output sentence rendering the target sentiment $\hat{y}_i$. In the following sections, we introduce our method in detail.
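The pipeline in Equation 1 can be read as the following high-level sketch; the four component functions are placeholders for the modules described in the remainder of this section.

```python
# High-level sketch of the Decode with Template transfer pipeline (Eq. 1).
def transfer(x, y_src, y_tgt, mask_fn, encoder, modify_fn, decoder):
    template = mask_fn(x, y_src)               # mask explicit sentiment words
    z = encoder(template)                      # sample the content representation
    template_tgt = modify_fn(template, y_tgt)  # insert target-sentiment representations
    return decoder(template_tgt, z)            # bidirectionally guided decoding
```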
Template Generation
We generate the templates that preserve the semantic content by masking all the sentiment words in the input sentences. Prior work has shown that masking sentiment words is a simple yet effective way to eliminate the sentiment information, since the sentiment of a sentence is usually expressed by explicit sentiment words. We use a sentiment lexicon that consists of 5106 negative words and 2759 positive words provided by (Zeng et al., 2018) to detect the sentiment words in input sentences. We use this lexicon because it combines two classical lexicons in sentiment analysis: the Subjectivity Lexicon (Wilson et al., 2005) and the Opinion Lexicon (Hu and Liu, 2004). The sentiment words in each sentence are detected by checking whether the stem of each word exists in the stemmed sentiment lexicon; this comparison effectively eliminates the influence of tense and voice. After the detection, we mask the sentiment words in each sentence with the token " neutral " and keep the other words fixed to obtain the templates. A minimal sketch of this masking step is given below.
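The sketch uses a toy lexicon; the actual lexicon from (Zeng et al., 2018) and the exact tokenization are not reproduced here, and the choice of stemmer is an illustrative assumption.

```python
# Illustrative template generation: replace lexicon sentiment words with <neutral>.
from nltk.stem import PorterStemmer

stemmer = PorterStemmer()
sentiment_lexicon = {"love", "great", "hate", "terrible"}          # toy placeholder
stemmed_lexicon = {stemmer.stem(w) for w in sentiment_lexicon}

def make_template(tokens):
    return [tok if stemmer.stem(tok.lower()) not in stemmed_lexicon
            else "<neutral>" for tok in tokens]

print(make_template("I love this place , the service is always great !".split()))
# ['I', '<neutral>', 'this', 'place', ',', 'the', 'service', 'is', 'always', '<neutral>', '!']
```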
Template Modification
Next, we modify the generated templates to endow them with the desired sentiments by replacing the token " neutral " with the representations of the target sentiments. Note that the sentiment representations should be suitable for the semantic content of the template, so the modification should combine the contextual information of the template with the target sentiment information. Inspired by (Sukhbaatar et al., 2015) and (Zhang et al., 2018a), we use the lexicon described previously as a sentiment memory to generate suitable sentiment representations for each template. Formally, for each sentence template $temp = \{t_1, t_2, ..., t_n\}$, where $t_i$ is an unmasked word, and a target sentiment $\hat{y}$, the corresponding sentiment representation $rep(\hat{y})$ can be obtained by:
$$rep(\hat{y}) = \frac{1}{n \cdot m_{\hat{y}}} \sum_{i=1}^{n} \sum_{j=1}^{m_{\hat{y}}} match(t_i, sent_j^{\hat{y}})\, sent_j^{\hat{y}} \quad (2)$$
where $sent_j^{\hat{y}}$ is a sentiment word in the lexicon with the label $\hat{y}$, and $m_{\hat{y}}$ is the number of these sentiment words. $n$ is the number of unmasked words in the template. $match$ calculates the match score between $t_i$ and $sent_j^{\hat{y}}$ as the averaging weight for each $sent_j^{\hat{y}}$; here we use the cosine similarity between their representations. Intuitively, we use the average of all the $t_i$ as the overall semantic representation of $temp$, and then extract suitable sentiment information with the attention mechanism. The reason we use this method is that the average of word vectors preserves the contextual similarity with the sentiment words, and also to some extent preserves the semantics of the template as a sentence embedding (Arora et al., 2016).
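A numpy sketch of Equation 2 is given below; the word vectors are assumed to be the pre-trained GloVe embeddings mentioned in the experiment setup, and the function names are ours.

```python
# Sentiment memory: attention-weighted average of lexicon word vectors (Eq. 2).
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def sentiment_representation(template_vecs, lexicon_vecs):
    """template_vecs: [n, d] vectors of unmasked template words;
       lexicon_vecs:  [m, d] vectors of lexicon words with the target label."""
    n, m = len(template_vecs), len(lexicon_vecs)
    rep = np.zeros(lexicon_vecs.shape[1])
    for t in template_vecs:
        for s in lexicon_vecs:
            rep += cosine(t, s) * s            # match score times the lexicon vector
    return rep / (n * m)
```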
Bidirectionally Guided VAE Model
We extend the vanilla sentence-VAE model (Bowman et al., 2015) by using bidirectional GRUs for both the encoder and the decoder: the latent feature from the encoder serves as a content representation that captures semantic information robustly, and the bidirectionally guided decoding utilizes both forward and backward contextual information to better preserve the content.
Content Encoding
We assume that the content of sentences with positive and negative sentiment shares the same latent semantic space. Our model therefore first imposes a prior distribution $p(z)$ on the content in the semantic space, and then assumes that the content representations $z$ for both positive and negative sentiment can be sampled from $p(z)$. For each sentence $x$, our model takes its template $temp$, with the original sentiment words masked, as the encoder input, projecting it into a unique region in the semantic space. Formally, the region is a learned posterior distribution $q(z|temp)$ described by the mean $\mu$ and the standard deviation $\sigma$. The content representation $z$ can then be sampled from this region. Not only do all the samples in the region contain similar semantic information, but the training process also forces our model to robustly decode plausible sentences from each sample.
Bidirectionally Guided Decoding
With the content representation $z$, we next conduct a bidirectionally guided decoding to generate output sentences from $z$ by using the modified templates (introduced previously) as partial decoder input. During generation, the decoder receives $z$ and the modified template $\widetilde{temp}$ as the input. $\widetilde{temp}$ contains possible future words in the context, which enables bidirectionally guided decoding. Formally, in the i-th decoding step, the decoder cell is conditioned on the i-th input word from $\widetilde{temp}$, as well as the bidirectional hidden states $h_i = [h_i^f, h_i^b]$, to generate the output word, where $h_i^f$ refers to the forward direction and $h_i^b$ to the backward direction. The strength of bidirectionally guided decoding lies in two aspects. First, it captures both forward and backward contextual information to better preserve the semantic content. Second, it prevents error accumulation and relieves the gradient vanishing caused by the non-linearities of an autoregressive RNN decoder (Mao et al., 2019). Moreover, since the target sentiment representations are fed into the decoder through the modified templates, the integrated target sentiment can influence each decoding step to output more natural sentences.

Figure 1: Model illustration with an example. The input sentence is with positive sentiment. The model first detects "love" and "great" as positive words, and then masks them with " neutral " and keeps other words fixed. Then the masked input is fed into the encoder to get the mean $\mu$ and the standard deviation $\sigma$ describing the semantic distribution. Then $z$ is sampled from the distribution as the content representation without sentiment information. The decoder receives $z$ as well as the modified template where all the " neutral "s are replaced by the negative representations. Finally, the decoder generates a sentence with the negative sentiment, while the content is similar to the input.
Training Loss
The general target of our model is to generate plausible sentences conditioned on the content representations and the modified templates with specified sentiments. Since parallel data is unavailable, the model is trained to reconstruct the input sentences with the original sentiment; at inference time, the aim changes to generating sentences that preserve the original content and render the opposite sentiment. Therefore, there are two objectives during training: (1) to learn a posterior distribution $q_\theta(z|temp)$ close to the prior $p(z)$, which is supervised using the Kullback-Leibler (KL) divergence (Kullback and Leibler, 1951) penalty, and (2) to reconstruct the input sentences $x$ from the content representation $z$ conditioned on the original sentiment $y$. Formally, training minimizes the loss:
$$\mathcal{L}_{vae}(\theta) = -\lambda_{kl}\, KL(q_\theta(z|temp)\,\|\,p(z)) + \mathbb{E}_{q_\theta(z|temp)}[\log p_\theta(x|y, z)] \quad (3)$$
where $\theta$ denotes the model parameters to be learned. $p(z)$ is the prior, which is set to a standard Gaussian ($\mu = 0$, $\sigma = 1$), and $q_\theta(z|temp)$ is the posterior taking the form $\mathcal{N}(\mu, \mathrm{diag}\,\sigma)$, where $\mu$ and $\sigma$ are generated from the template encoder. $\log p_\theta$ is the reconstruction log-likelihood of $x$. $\lambda_{kl}$ is an adaptive parameter to balance the reconstruction term and the KL penalty. We follow the annealing method proposed in (Bowman et al., 2015) to calculate $\lambda_{kl}$ by:
$$\lambda_{kl} = \mathrm{sigmoid}\left(-k \cdot (step - step_0)\right) \quad (4)$$
where $step$ is the number of current training batches, and $k$ and $step_0$ are hyper-parameters.
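Concretely, the schedule can be computed as in the sketch below; the grouping of the terms inside the sigmoid follows our reading of Equation 4, and the constants are the ones reported in the experiment setup.

```python
# KL-weight annealing schedule (Eq. 4), using the reported hyper-parameters.
import math

def kl_weight(step, k=0.0025, step_0=2500):
    # sigmoid(-k * (step - step_0))
    return 1.0 / (1.0 + math.exp(k * (step - step_0)))
```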
Adversarial Training with Sentiment Classifier
To further guide the generated sentences to render the target sentiments, we also conduct an adversarial training by back-propagating the sentiment classification loss for the generated sentences to the decoder. We use a Convolutional Neural Network (CNN) as the sentiment classifier, and minimize the Cross-Entropy loss L sent below during training:
$$\mathcal{L}_{sent} = -\sum_{i=1}^{n} \left[\, y_i \log(f(x_i)) + (1 - y_i)\log(1 - f(x_i)) \,\right] \quad (5)$$
where, for each generated $x_i$, $y_i$ is the sentiment label (1 for "positive", 0 for "negative"), and $f(x_i)$ is the probability of $x_i$ rendering positive sentiment. However, due to the discreteness of the generated text, the gradients of the sentiment classification loss cannot be directly propagated from the classifier to our VAE model. In existing works, (Yu et al., 2017) solve a similar problem using Policy Gradient (Sutton et al., 2000), which turns out to suffer from high variance. We instead use a continuous Gumbel-Softmax relaxation, in the spirit of (Hu et al., 2017), which keeps the generator differentiable:
$$p_i = \frac{\exp((\log \pi_i + g_i)/\tau)}{\sum_{j=1}^{V} \exp((\log \pi_j + g_j)/\tau)} \quad (6)$$
where $\pi_i$ is the softmax probability of choosing the i-th word and $V$ is the size of the vocabulary. $g_i$ is noise independently sampled from Gumbel(0, 1). $\tau$ is a temperature parameter, and we use an annealing strategy to update it during training: the initial value of $\tau$ is set to 1.0, and it decays to $\tau \exp(-bn \cdot 0.00003)$ after every 100 batches until reaching the minimum value of 0.1, where $bn$ is the batch number. After obtaining $p_i$ for each word of the generated sentence $x_i$ in Equation 5, each word $w_i$ in $x_i = [w_1, ..., w_n]$ is obtained by:
$$w_i = \sum_{j=1}^{V} p_j \cdot embed_j \quad (7)$$
where $embed_j$ is the pre-trained word vector for the j-th word in the whole vocabulary.
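A self-contained numpy sketch of this relaxation (Equations 6-7) is given below; in the actual model the operation lives inside the computation graph so that gradients flow back to the decoder, and all names here are illustrative.

```python
# Gumbel-Softmax relaxation over the vocabulary and soft word embedding (Eqs. 6-7).
import numpy as np

def soft_word(log_pi, embeddings, tau=1.0, rng=np.random):
    """log_pi: [V] log-probabilities of the words; embeddings: [V, d] vectors."""
    u = rng.uniform(1e-10, 1.0, size=log_pi.shape)
    g = -np.log(-np.log(u))                    # Gumbel(0, 1) noise
    scores = (log_pi + g) / tau
    scores = scores - scores.max()             # numerical stability
    p = np.exp(scores) / np.exp(scores).sum()  # Eq. (6)
    return p @ embeddings                      # Eq. (7): soft embedding of w_i
```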
Overall Objective
To combine the above described two partial losses together, the overall objective is to minimize the following loss function:
$$\mathcal{L} = \alpha \mathcal{L}_{vae}(\theta) + \beta \mathcal{L}_{sent} \quad (8)$$
where α and β are weight hyper-parameters to balance the two losses, respectively.
Experiments
Dataset
We evaluate our model by conducting experiments on the Yelp and Amazon review datasets (Table 2), released in prior work. The sentences in the Yelp dataset are reviews about restaurants and movies, while in the Amazon dataset the reviews are about online shopping products (He and McAuley, 2016). Each sentence in these two datasets is labeled as having either a positive or a negative sentiment. Both datasets are randomly split into train, validation, and test sets.
Experiment Setup
We use single-layer bidirectional GRU networks for both the encoder and the decoder with a hidden dimension of 200, and the dimension of the input word embeddings is 300. The word embeddings used for model input and sentiment representation generation are pre-trained GloVe (Pennington et al., 2014) word vectors. We use a batch size of 32 for input sentences. The $k$ and $step_0$ used to calculate $\lambda_{kl}$ are set to 0.0025 and 2500, respectively. The $\alpha$ and $\beta$ used to balance the two partial losses are set to 0.4 and 0.5. We use the Adam (Kingma and Ba, 2014) optimization algorithm to train our VAE model and Adabound (Luo et al., 2019) to train the CNN sentiment classifier. The initial learning rate is set to 0.001 for both models. Other hyper-parameters are chosen by grid search based on performance on the validation set.
Baselines
We compare our method with the following five representative state-of-the-art approaches as the baseline models.
Cross-Alignment Auto-Encoder (CAAE):
This apporach is proposed in (Shen et al., 2017). It leverages refined alignment of latent representations in the hidden layers to perform text style transfer.
Control and Generation (CtrlGen):
This approach is proposed by (Hu et al., 2017). CtrlGen combines variational auto-encoders with different style discriminators for the effective imposition of style and semantic structure.
TemplateBased: This approach simply deletes the original sentiment words in each input sentence to form a template, then fills it in with selected target sentiment words (Li et al., 2018).
DeleteAndRetrieve: This method is also proposed in (Li et al., 2018). It combines the template above with retrieved suitable target sentiment words as the input, then generates output sentences through a Seq2seq RNN model.

Back-translation for Style Transfer (BST): This model is proposed in (Prabhumoye et al., 2018). It uses back-translation to preserve content and style-specific generators to render target styles.

We regard CAAE and CtrlGen as distributional disentanglement approaches, TemplateBased and DeleteAndRetrieve as instance-level modification, and BST as a back-translation approach.
Automatic Evaluation
We report results on the test sets for the automatic evaluation in two aspects: sentiment transfer intensity and content preservation. For sentiment transfer intensity, we use the classification accuracy (ACC) of the output sentences under a pre-trained TextCNN model as described in (Kim, 2014); after fine-tuning, it achieves a nearly perfect accuracy of 97.6% on our dataset. For content preservation, we first compute the BLEU (Papineni et al., 2002) score between output sentences and human references (provided by (Li et al., 2018) as ground truth). Besides, we also use the Word Mover's Distance (WMD) to calculate the minimum "distance" between the word embeddings of the outputs and the human references, where a smaller distance signifies a higher similarity (Kusner et al., 2015). The human references are sentence pairs with opposite sentiments but the same content, manually produced by workers.

Table 3: Automatic evaluation results.

The results are shown in Table 3. Higher ACC and BLEU scores mean better performance, while a smaller WMD signifies better content preservation. One can see from Table 3 that our method achieves the best overall performance on both datasets, especially in content preservation; the BLEU score is largely improved, from 20.9 to 25.2, on the Yelp dataset. The distributional disentanglement methods CAAE and CtrlGen achieve lower performance in preserving content, mainly because of the impure disentanglement in the latent space. The performance of BST signifies that back-translation is an effective method to capture content information. Also, TemplateBased achieves competitive performance, which shows the advantage of instance-level modification for preserving content. That our method achieves the best performance demonstrates the effectiveness of combining instance-level modification and semantic generation.
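The content-preservation metrics can be approximated with off-the-shelf tools; the following is an illustrative sketch using NLTK for BLEU and gensim for WMD, not the authors' evaluation scripts, and the embedding name in the usage comment is just one possible choice.

from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
import gensim.downloader as api

def evaluate_content(outputs, references, keyed_vectors):
    # outputs, references: lists of token lists (one human reference per output)
    smooth = SmoothingFunction().method1
    bleu = sum(sentence_bleu([ref], out, smoothing_function=smooth)
               for out, ref in zip(outputs, references)) / len(outputs)
    wmd = sum(keyed_vectors.wmdistance(out, ref)
              for out, ref in zip(outputs, references)) / len(outputs)
    return bleu, wmd

# Example usage (downloads pre-trained vectors on first call):
# vectors = api.load("glove-wiki-gigaword-100")
# print(evaluate_content([["the", "food", "was", "good"]],
#                        [["the", "food", "was", "great"]], vectors))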
Human Evaluation
To capture more aspects of performance on this task, we also conduct a human evaluation of the generated results. We follow the evaluation method proposed by (Mir et al., 2019) to obtain human ratings on sentiment transfer intensity, content preservation, and naturalness. We randomly select 100 sentences from the test set and collect the transfer results of each approach. Each rater is given a questionnaire consisting of 100 questions. For each question, the rater is asked to rate the six transfer results corresponding to the input sentence on the three aspects above, on a scale from 1 to 5, where 5 means the best performance. We asked four raters to give their annotations. To make the result more convincing, we also calculate the inter-rater agreement according to (Krippendorff, 2018). The agreement among our raters is 0.70, 0.78, and 0.69 for transfer intensity, content preservation, and naturalness, respectively. We average the human ratings for each evaluation metric, and the results are shown in Table 4. Our method achieves substantially the best results in all three aspects. It is worth mentioning that our method outperforms all baseline models in Naturalness. A possible explanation is that we use a bidirectional decoder as well as templates for generation, which provides more contextual information. Although
TemplateBased simply replaces words and shows poor Naturalness, the explicit sentiment words in its results contribute to considerable performance in sentiment transfer intensity.
The other methods, CAAE, CtrlGen, DeleteAndRetrieve and BST, which use autoregressive RNN decoders for generation, also output readable sentences (fairly good Naturalness), yet insufficiently preserve semantic content. This is mainly because error accumulation in decoding brings deviation from the original content.
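As a small sketch of how the ratings could be aggregated and the inter-rater agreement computed, the following assumes the third-party krippendorff package, which is not necessarily the tool used for the numbers reported above; the toy ratings are fabricated for illustration only.

import numpy as np
import krippendorff  # assumed third-party package: pip install krippendorff

def summarise_ratings(ratings):
    # ratings: (n_raters, n_items) array of 1-5 scores for one metric and one method
    mean_score = ratings.mean()
    alpha = krippendorff.alpha(reliability_data=ratings,
                               level_of_measurement="interval")
    return mean_score, alpha

# Toy example with 4 raters and 5 items:
toy = np.array([[4, 5, 3, 4, 5],
                [4, 4, 3, 5, 5],
                [5, 4, 3, 4, 4],
                [4, 5, 2, 4, 5]])
print(summarise_ratings(toy))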
Ablation Study
We conduct an ablation study to evaluate the contribution of three important components in our approach: the modified template, the content representation, and the adversarial training with the sentiment classifier. We remove each component from our model independently to observe its influence on different aspects of the performance. The results are shown in Table 5. When we remove the modified templates from the decoder input, the BLEU score drops dramatically, from 25.2 to 4.5 on the Yelp dataset and from 27.9 to 3.7 on the Amazon dataset; the WMD also rises sharply, roughly three to four times, on both datasets. This indicates that the template plays a vital role for content preservation in the bidirectionally guided decoding. Since removing the template also removes the target sentiment representations, we do not report sentiment transfer accuracy for this setting. We next disable the semantic representation by setting it to a random vector, which causes a substantial reduction of BLEU on both datasets, suggesting that the semantic representation is also essential for preserving content. However, the lack of the semantic representation brings little decrease in sentiment transfer accuracy, because we impose target sentiments by directly modifying the templates. Finally, we remove the loss L_sent to eliminate the supervision from the sentiment classifier during training, and find that the sentiment transfer accuracy goes down remarkably. This verifies that the adversarial training does help the generated sentences render the target sentiments.
To sum up, the modified template is a critical component for enhancing decoding towards content preservation, while the supervision from the adversarial training mainly contributes to successfully transferring the sentiment.
Evaluation of Lexicons Usage
Since our method utilizes an external lexicon to facilitate both template generation and template modification, it is also important to evaluate the impact of the lexicon size. We randomly select 25%, 50%, 75% and 100% of both the positive and the negative words in the lexicon we use, and compare the sentiment transfer accuracy of our model. Table 6 reports the average performance over 10 runs. We can see that as the size of the lexicon grows, the sentiment transfer accuracy on both the Yelp and Amazon datasets improves moderately. When we use 25% and 50% of the lexicon, the accuracies are close, while increasing to 75% brings a considerable improvement. This suggests that a comprehensive lexicon does provide more sufficient sentiment information to our method.
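The lexicon-size study can be reproduced with a simple subsampling helper; run_transfer_and_score in the commented usage below is a hypothetical stand-in for training and scoring the model with the reduced lexicon, not a function from the original system.

import random

def subsample_lexicon(positive_words, negative_words, fraction, seed=None):
    # Keep `fraction` of both polarity lists, as in the lexicon-size study.
    rng = random.Random(seed)
    keep_pos = rng.sample(list(positive_words), int(len(positive_words) * fraction))
    keep_neg = rng.sample(list(negative_words), int(len(negative_words) * fraction))
    return set(keep_pos), set(keep_neg)

# The study averages sentiment-transfer accuracy over 10 runs per fraction;
# run_transfer_and_score is a hypothetical stand-in for the full pipeline:
# for fraction in (0.25, 0.50, 0.75, 1.00):
#     accs = [run_transfer_and_score(*subsample_lexicon(pos, neg, fraction, seed=s))
#             for s in range(10)]
#     print(fraction, sum(accs) / len(accs))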
Case Study
We further analyze the output sentences from our method; seven sampled pairs are shown in Table 7. For sentences with explicit sentiment words, our approach effectively changes them, resulting in word replacement (e.g. "worst" to "best") or the addition of negation words (e.g. "very helpful" to "not helpful at all").
Positive to Negative

they bring it out front for you and are very helpful.
they bring it out front for you and are not helpful at all.

they pay very much attention to customers!
they rush and do n't pay attention to their customers.

i love italian and i eat here often.
i hate italian and i do n't eat here.

Negative to Positive

the marinara sauce had no flavor.
the marinara sauce is so flavorful.

the chocolate cake was the worst i had eaten in a while.
the chocolate cake was one of the best desserts i 've ever had.

the food was pretty bad , i would not go there again.
the food was pretty good i would definitely go there again.

the queen bed was horrible
the queen bed made my day

Table 7: Example result sentences. In each pair, the first line is the input sentence and the second line is the output sentence from our model.

Our method can also transfer the underlying sentiment without explicit sentiment words by rendering the target sentiment integrated with the semantic content, such as converting "pay very much attention" to "rush and do not pay attention" to describe the waiters, or "horrible" to "made my day". Transferring the underlying sentiment also inevitably changes sentiment-related actions in the semantic content. For example, transferring "i love italian and i eat here often" to "i hate italian and i do n't eat here" also changes how often the user goes to the Italian restaurant. However, this is still acceptable, as both sentences describe the attitude towards the restaurant. Moreover, as a sacrifice for content preservation, our method does not introduce much variation in sentence structure.
Conclusions and Future Work
In this paper, we focus on content preservation in the sentiment transfer task and propose the Decode with Template model to effectively modify the underlying sentiment of input sentences. We use the template, in which the explicit sentiment words have been modified, as decoder input, which enables bidirectionally guided decoding that captures both forward and backward contextual information when generating the output. Our method effectively preserves the semantic content and naturalness of output sentences. Besides, the proposed bidirectionally guided decoding can be adapted to other text modification and generation tasks. We conduct experiments on two review datasets, and the results show that our approach significantly outperforms state-of-the-art methods, especially in content preservation. The ablation study also shows the importance of the templates in decoding for preserving semantic content. We also consider our work an application of the sentiment lexicon, so for future work we plan to explore the construction of lexicons for other styles, so that our method can be utilized in more general text style transfer tasks. We are also interested in extending our method to other text modification tasks, such as lexical correction and writing polishing.
Acknowledgement
This work is supported by PolyU Teaching Development with project code 1.61.xx.9A5V and RGC Collaborative Research Fund 2018/19 with project code C6030-18G.
Table 2: Statistics of the Yelp and Amazon datasets.
Yelp
Method               Sentiment  Content  Naturalness
CAAE                 2.379      1.605    2.506
CtrlGen              3.445      1.764    2.730
TemplateBased        3.304      3.998    2.489
DeleteAndRetrieve    2.501      3.584    3.500
BST                  2.437      3.453    3.565
Our method           3.449      4.173    3.709

Amazon
Method               Sentiment  Content  Naturalness
CAAE                 2.643      1.455    2.834
CtrlGen              3.055      2.631    3.001
TemplateBased        3.273      3.400    2.340
DeleteAndRetrieve    2.309      3.220    3.554
BST                  2.803      3.661    3.150
Our method           3.221      3.845    3.669

Table 4: Human evaluation results.
Yelp
Method                     Accuracy  BLEU  WMD
Our method                 0.930     25.2  3.126
w/o Template               -         4.5   10.343
w/o Content Rep.           0.912     17.6  5.617
w/o Adversarial Training   0.884     22.2  3.170

Amazon
Method                     Accuracy  BLEU  WMD
Our method                 0.752     27.9  3.281
w/o Template               -         3.7   13.600
w/o Content Rep.           0.751     20.1  4.399
w/o Adversarial Training   0.712     24.5  3.390

Table 5: Ablation study results.
Lexicon size   ACC in Yelp   ACC in Amazon
1967 (25%)     0.885         0.737
3933 (50%)     0.878         0.719
5899 (75%)     0.922         0.752
7865 (100%)    0.930         0.752

Table 6: Comparison between different lexicon sizes in sentiment transfer accuracy.
Henceforth, for simplicity, we use content to denote content independent of sentiment information.
Arora, S., Liang, Y., and Ma, T. (2016). A simple but tough-to-beat baseline for sentence embeddings.
Bowman, S. R., Vilnis, L., Vinyals, O., Dai, A. M., Jozefowicz, R., and Bengio, S. (2015). Generating sentences from a continuous space. arXiv preprint arXiv:1511.06349.
Cho, K., Van Merriënboer, B., Gulcehre, C., Bahdanau, D., Bougares, F., Schwenk, H., and Bengio, Y. (2014). Learning phrase representations using RNN encoder-decoder for statistical machine translation. arXiv preprint arXiv:1406.1078.
Fu, Z., Tan, X., Peng, N., Zhao, D., and Yan, R. (2018). Style transfer in text: Exploration and evaluation. In Thirty-Second AAAI Conference on Artificial Intelligence.
He, R. and McAuley, J. (2016). Ups and downs: Modeling the visual evolution of fashion trends with one-class collaborative filtering. In Proceedings of the 25th International Conference on World Wide Web, pages 507-517.
Hu, M. and Liu, B. (2004). Mining and summarizing customer reviews. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 168-177. ACM.
Hu, Z., Yang, Z., Liang, X., Salakhutdinov, R., and Xing, E. P. (2017). Toward controlled generation of text. In Proceedings of the 34th International Conference on Machine Learning, Volume 70, pages 1587-1596. JMLR.org.
Jang, E., Gu, S., and Poole, B. (2016). Categorical reparameterization with Gumbel-Softmax. arXiv preprint arXiv:1611.01144.
Jin, Z., Jin, D., Mueller, J., Matthews, N., and Santus, E. (2019). Unsupervised text style transfer via iterative matching and translation.
John, V., Mou, L., Bahuleyan, H., and Vechtomova, O. (2018). Disentangled representation learning for text style transfer. arXiv preprint arXiv:1808.04339.
Kim, Y. (2014). Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882.
Kingma, D. P. and Ba, J. (2014). Adam: A method for stochastic optimization.
Kingma, D. P. and Welling, M. (2013). Auto-encoding variational Bayes. arXiv preprint arXiv:1312.6114.
Krippendorff, K. (2018). Content analysis: An introduction to its methodology. Sage Publications.
Kullback, S. and Leibler, R. A. (1951). On information and sufficiency. The Annals of Mathematical Statistics, 22(1):79-86.
Kusner, M., Sun, Y., Kolkin, N., and Weinberger, K. (2015). From word embeddings to document distances. In International Conference on Machine Learning, pages 957-966.
Lample, G., Subramanian, S., Smith, E., Denoyer, L., Ranzato, M., and Boureau, Y.-L. (2019). Multiple-attribute text rewriting. In International Conference on Learning Representations.
Li, J., Jia, R., He, H., and Liang, P. (2018). Delete, retrieve, generate: A simple approach to sentiment and style transfer. arXiv preprint arXiv:1804.06437.
Luo, L., Xiong, Y., Liu, Y., and Sun, X. (2019). Adaptive gradient methods with dynamic bound of learning rate. In Proceedings of the 7th International Conference on Learning Representations, New Orleans, Louisiana, May.
Mao, Q., Li, J., Wang, S., Zhang, Y., Peng, H., He, M., and Wang, L. (2019). Aspect-based sentiment classification with attentive neural Turing machines. In Proceedings of the 28th International Joint Conference on Artificial Intelligence, pages 5139-5145. AAAI Press.
Mir, R., Felbo, B., Obradovich, N., and Rahwan, I. (2019). Evaluating style transfer for text.
Papineni, K., Roukos, S., Ward, T., and Zhu, W.-J. (2002). BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.
Pennington, J., Socher, R., and Manning, C. D. (2014). GloVe: Global vectors for word representation. In Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.
Prabhumoye, S., Tsvetkov, Y., Salakhutdinov, R., and Black, A. W. (2018). Style transfer through back-translation. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 866-876, Melbourne, Australia.
Santos, C. N. d., Melnyk, I., and Padhi, I. (2018). Fighting offensive language on social media with unsupervised text style transfer. arXiv preprint arXiv:1805.07685.
Shen, T., Lei, T., Barzilay, R., and Jaakkola, T. (2017). Style transfer from non-parallel text by cross-alignment. In Advances in Neural Information Processing Systems, pages 6830-6841.
Sukhbaatar, S., Szlam, A., Weston, J., and Fergus, R. (2015). End-to-end memory networks.
Sutton, R. S., McAllester, D. A., Singh, S. P., and Mansour, Y. (2000). Policy gradient methods for reinforcement learning with function approximation. In Advances in Neural Information Processing Systems, pages 1057-1063.
Wilson, T., Wiebe, J., and Hoffmann, P. (2005). Recognizing contextual polarity in phrase-level sentiment analysis. In Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing.
Wu, X., Zhang, T., Zang, L., Han, J., and Hu, S. (2019). "Mask and infill": Applying masked language model to sentiment transfer. arXiv preprint arXiv:1908.08039.
Yang, Z., Hu, Z., Dyer, C., Xing, E. P., and Berg-Kirkpatrick, T. (2018). Unsupervised text style transfer using language models as discriminators. In Advances in Neural Information Processing Systems, pages 7287-7298.
Yu, L., Zhang, W., Wang, J., and Yu, Y. (2017). SeqGAN: Sequence generative adversarial nets with policy gradient. In Thirty-First AAAI Conference on Artificial Intelligence.
Zeng, Y., Lan, Y., Hao, Y., Li, C., and Zheng, Q. (2018). Leveraging multi-grained sentiment lexicon information for neural sequence models. arXiv preprint arXiv:1812.01527.
Zhang, Y., Xu, J., Yang, P., and Sun, X. (2018a). Learning sentiment memories for sentiment modification without parallel data. arXiv preprint arXiv:1808.07311.
Text emotion distribution learning via multi-task convolutional neural network. Y Zhang, J Fu, D She, Y Zhang, S Wang, Yang , J , In IJCAI. Zhang, Y., Fu, J., She, D., Zhang, Y., Wang, S., and Yang, J. (2018b). Text emotion distribution learning via multi-task convolutional neural network. In IJCAI, pages 4595-4601. |
||
15,046,051 | Error-tagged Learner Corpus of Czech | The paper describes a learner corpus of Czech, currently under development. The corpus captures Czech as used by nonnative speakers. We discuss its structure, the layered annotation of errors and the annotation process. | [] | Error-tagged Learner Corpus of Czech
Association for Computational LinguisticsCopyright Association for Computational LinguisticsJuly 2010. 2010
Jirka Hana
Charles University Prague
Czech Republic
Alexandr Rosen alexandr.rosen@ff.cuni.cz
Charles University
PragueCzech Republic
Svatava Škodová svatava.skodova@tul.cz
Technical University Liberec
Czech Republic
Barbora Štindlová barbora.stindlova@tul.cz
Technical University Liberec
Czech Republic
Error-tagged Learner Corpus of Czech
Proceedings of the Fourth Linguistic Annotation Workshop, ACL 2010
the Fourth Linguistic Annotation Workshop, ACL 2010Uppsala, Sweden; cAssociation for Computational LinguisticsJuly 2010. 2010
The paper describes a learner corpus of Czech, currently under development. The corpus captures Czech as used by nonnative speakers. We discuss its structure, the layered annotation of errors and the annotation process.
Introduction
Corpora consisting of texts produced by nonnative speakers are becoming an invaluable source of linguistic data, especially for foreign language educators. In addition to morphosyntactic tagging and lemmatisation, common in other corpora, learner corpora can be annotated by information relevant to the specific nonstandard language of the learners. Cases of deviant use can be identified, emended and assigned a tag specifying the type of the error, all of which helps to exploit the richness of linguistic data in the texts. However, annotation of this kind is a challenging tasks, even more so for a language such as Czech, with its rich inflection, derivation, agreement, and largely information-structure-driven constituent order. A typical learner of Czech makes errors across all linguistic levels, often targeting the same form several times.
The proposed annotation scheme is an attempt to respond to the requirements of annotating a deviant text in such a language, striking a compromise between the limitations of the annotation process and the demands of the corpus user. The three-level format allows for successive emendations, involving multiple forms in discontinuous sequences. In many cases, the error type follows from the comparison of the faulty and corrected forms and is assigned automatically, sometimes using information present in morphosyntac-tic tags, assigned by a tagger. In more complex cases, the scheme allows for representing relations making phenomena such as the violation of agreement rules explicit.
After an overview of issues related to learner corpora in §2 and a brief introduction to the project of a learner corpus of Czech in §3 we present the concept of our annotation scheme in §4, followed by a description of the annotation process in §5.
Learner corpora
A learner corpus, also called interlanguage or L2 corpus, is a computerised textual database of language as produced by second language (L2) learners (Leech, 1998). Such a database is a very powerful resource in research of second language acquisition. It can be used to optimise the L2 learning process, to assist authors of textbooks and dictionaries, and to tailor them to learners with a particular native language (L1).
More generally, a learner corpus -like other corpora -serves as a repository of authentic data about a language (Granger, 1998). In the domain of L2 acquisition and teaching of foreign languages, the language of the learners is called interlanguage (Selinker, 1983). 1 An interlanguage includes both correct and deviant forms. The possibility to examine learners' errors on the background of the correct language is the most important aspect of learner corpora (Granger, 1998).
Investigating the interlanguage is easier when the deviant forms are annotated at least by their correct counterparts, or, even better, by tags making the nature of the error explicit. Although learner corpora tagged this way exist, the two decades of research in this field have shown that designing a tagset for the annotation of errors is a task highly sensitive to the intended use of the corpus and the results are not easily transferable from one language to another.
Learner corpora can be classified according to several criteria:
• Target language (TL): Most learner corpora cover the language of learners of English as a second or foreign language (ESL or EFL). The number of learner corpora for other languages is smaller but increasing.
• Medium: Learner corpora can capture written or spoken texts, the latter much harder to compile, thus less common.
• L1: The data can come from learners with the same L1 or with various L1s.
• Proficiency in TL: Some corpora gather texts of students at the same level, other include texts of speakers at various levels. Most corpora focus on advanced students.
• Annotation: Many learner corpora contain only raw data, possibly with emendations, without linguistic annotation; some include partof-speech (POS) tagging. Several include error tagging. Despite the time-consuming manual effort involved, the number of error-tagged learner corpora is growing.
Error-tagged corpora use the following taxonomies to classify the type of error:
• Taxonomies marking the source of error: The level of granularity ranges from broad categories (morphology, lexis, syntax) to more specific ones (auxiliary, passive, etc.).
• Taxonomies based on formal types of alternation of the source text: omission, addition, misformation, mis-ordering.
• Hierarchical taxonomies based on a combination of various aspects: error domain (formal, grammatical, lexical, style errors), error category (agglutination, diacritics, derivation inflection, auxiliaries, gender, mode, etc.), word category (POS).
• Without error taxonomies, using only correction as the implicit explanation for an error.
In Table 1 we present a brief summary of existing learner corpora tagged by POS and/or error types, including the size of the corpus (in millions of words or Chinese characters), the mother tongue of the learners, or - in the case of learners with different linguistic backgrounds - the number of mother tongues (L1), the TL and the learners' level of proficiency in TL. For an extensive overview see, for example, (Pravec, 2002; Nesselhauf, 2004; Xiao, 2008).
A learner corpus of Czech
In many ways, building a learner corpus of Czech as a second/foreign language is a unique enterprise. To the best of our knowledge, the CzeSL corpus (Czech as a Second/Foreign Language) is the first learner corpus ever built for a highly inflectional language, and one of the very few using multi-layer annotation (together with FALKO -see Table 1). The corpus consists of 4 subcorpora according to the learners' L1:
• The Russian subcorpus represents an interlanguage of learners with a Slavic L1.
• The Vietnamese subcorpus represents a numerous minority of learners with very few points of contact between L1 and Czech.
• The Romani subcorpus represents a linguistic minority with very specific traits in the Czech cultural context.
• The "remnant" subcorpus covers texts from speakers of various L1s.
The whole extent of CzeSL will be two million words (in 2012). Each subcorpus is again divided into two subcorpora of written and spoken texts; 2 this division guarantees the representative character of the corpus data. The corpus is based on texts covering all language levels according to the Common European Framework of Reference for Languages, from real beginners (A1 level) to advanced learners (level B2 and higher). The texts are elicited during various situations in classes; they are not restricted to parts of written examination. This spectrum of various levels and situations is unique in the context of other learner corpora.
Each text is equipped with the necessary background information, including sociological data about the learner (age, gender, L1, country, language level, other languages, etc.) and the situation (test, homework, school work without the possibility to use a dictionary, etc.).
Annotation scheme
The feasible and the desirable
The error tagging system for CzeSL is designed to meet the requirements of Czech as an inflectional language. Therefore, the scheme is:
• Detailed but manageable for the annotators.
• Informative -the annotation is appropriate to Czech as a highly inflectional language.
• Open to future extensions -it allows for more detailed taxonomy to be added in the future.
The annotators are no experts in Czech as a foreign language or in 2L learning and acquisition, and they are unaware of possible interferences between languages the learner knows. Thus they may fail to recognise an interferential error. A sentence such as Tokio je pěkný hrad 'Tokio is a nice castle' is grammatically correct, but its author, a native speaker of Russian, was misled by 'false friends' and assumed hrad 'castle' as the Czech equivalent of Russian gorod 'town, city'. 3 Similarly in Je tam hodně sklepů 'There are many cellars.' The formally correct sentence may strike the reader as implausible in the context, but it is impossible to identify and emend the error without the knowledge that sklep in Russian means 'grave', not 'cellar' (= sklep in Czech).
For some types of errors, the problem is to define the limits of interpretation. The clause kdyby citila na tebe zlobna is grammatically incorrect, yet roughly understandable as 'if she felt angry at you'. In such cases the task of the annotator is interpretation rather than correction. The clause can be rewritten as kdyby se na tebe cítila rozzlobená 'if she felt angry at you', or kdyby se na tebe zlobila 'if she were angry at you'; the former being less natural but closer to the original, unlike the latter. It is difficult to provide clear guidelines.
Errors in word order represent another specific type. Czech constituent order reflects information structure and it is sometimes difficult to decide (even in a context) whether an error is present. The sentence Rádio je taky na skříni 'A radio is also on the wardrobe' suggests that there are at least two radios in the room, although the more likely interpretation is that among other things, there is also a radio, which happens to sit on the wardrobe. Only the latter interpretation would require a different word order: Taky je na skříni rádio. Similarly difficult may be decisions about errors labelled as lexical and modality.
The phenomenon of Czech diglossia is reflected in the problem of annotating non-standard language, usually individual forms with colloquial morphological endings. The learners may not be aware of their status and/or an appropriate context for their use, and the present solution assumes that colloquial Czech is emended under the rationale that the author expects the register of his text to be perceived as unmarked.
On the other hand, there is the primary goal of the corpus: to serve the needs of the corpus users. The resulting error typology is a compromise between the limitations of the annotation process and the demands of research into learner corpora.
The corpus can be used for comparisons among learner varieties of Czech, studied as national interlanguages (Russian, Vietnamese, Romani etc.) using a matrix of statistic deviations. Similarly interesting are the heterogeneous languages of learners on different stages of acquisition. From the pedagogical point of view, corpus-based analyses have led to a new inductive methodology of data-driven learning, based on the usage of concordances in exercises or to support students' independent learning activities.
The framework
Annotated learner corpora sometimes use data formats and tools developed originally for annotating speech. Such environments allow for an arbitrary segmentation of the input and multilevel annotation of segments (Schmidt, 2009). Typically, the annotator edits a table with columns corresponding to words and rows to levels of annotation. A cell can be split or more cells merged to allow for annotating smaller or larger segments. This way, phenomena such as agreement or word order can be emended and tagged (Lüdeling et al., 2005).
However, in the tabular format vertical correspondences between the original word form and its emended equivalents or annotations at other levels may be lost. It is difficult to keep track of links between forms merged into a single cell, spanning multiple columns, and the annotations of a form at other levels (rows). This may be a problem for successive emendations involving a single form, starting from a typo up to an ungrammatical word order, but also for morphosyntactic tags assigned to forms, whenever a form is involved in a multiword annotation and its equivalent or tag leaves the column of the original form.
While in the tabular format the correspondences between elements at various levels are captured only implicitly, in our annotation scheme these correspondences are explicitly encoded. Our format supports the option of preserving correspondences across levels, both between individual word forms and their annotations, while allowing for arbitrary joining and splitting of any number of non-contiguous segments. The annotation levels are represented as a graph consisting of a set of parallel paths (annotation levels) with links between them. Nodes along the paths always stand for word tokens, correct or incorrect, and in a sentence with nothing to correct the corresponding word tokens in every pair of neighbouring paths are linked 1:1. Additionally, the nodes can be assigned morphosyntactic tags, syntactic functions or any other word-specific information. Whenever a word form is emended, the type of error can be specified as a label of the link connecting the incorrect form at level S i with its emended form at level S i+1 . In general, these labelled relations can link an arbitrary number of elements at one level with an arbitrary number of elements at a neighbouring level. The elements at one level participating in this relation need not form a contiguous sequence. Multiple words at any level are thus identified as a single segment, which is related to a segment at a neighbouring level, while any of the participating word forms can retain their 1:1 links with their counterparts at other levels. This is useful for splitting and joining word forms, for changing word order, and for any other corrections involving multiple words. Nodes can also be added or omitted at any level to correct missing or odd punctuation signs or syntactic constituents. See Figure 1 below for an example of this multi-level annotation scheme.
The option of relating multiple nodes as single segments across levels could also be used for treating morphosyntactic errors in concord and government. However, in this case there is typically one correct form involved, e.g., the subject in subject-predicate agreement, the noun in adjective-noun agreement, the verb assigning case to a complement, the antecedent in pronominal reference. Rather than treating both the correct and the incorrect form as equals in a 2:2 relation between the levels, the incorrect form is emended using a 1:1 link with an option to refer to the correct form. Such references link pairs of forms at neighbouring levels rather than the forms themselves to enable possible references from a multiword unit (or) to another multi-word unit. See Figure 1 below again, where such references are represented by arrows originating in labels val.
A single error may result in multiple incorrect forms as shown in (1). The adjective velký 'big-NOM-SG-M(ASC)' correctly agrees with the noun pes 'dog-NOM-SG-MASC'. However, the case of the noun is incorrect -it should be in accusative rather than nominative. When the noun's case is corrected, the case of the adjective has to be corrected as well. Then multiple references are made: to the verb as the case assigner for the noun, and to the noun as the source of agreement for the adjective.
(1) a. Annotation of learners' texts is often far from straightforward, and alternative interpretations are available even in a broader context. The annotation format supports alternatives, but for the time being the annotation tool does not support local disjunctions. This may be a problem if the annotator has multiple target hypotheses in mind.
Three levels of annotation
A multi-level annotation scheme calls for some justification, and once such a scheme is adopted, the question of the number of levels follows.
After a careful examination of alternatives, we have arrived at a two-stage annotation design, based on three levels. A flat, single-stage, twolevel annotation scheme would be appropriate if we were interested only in the original text and in the annotation at some specific level (fully emended sentences, or some intermediate stage, such as emended word forms). The flat design could be used even if we insisted on registering some intermediate stages of the passage from the original to a fully emended text, and decided to store such information with the word-form nodes. However, such information might get lost in the case of significant changes involving deletions or additions (e.g., in Czech as a pro-drop language, the annotator may decide that a misspelled personal pronoun in the subject position should be deleted and the information about the spelling error would lost). The decision to use a multi-level design was mainly due to our interest in annotating errors in single forms as well as those spanning (potentially discontinuous) strings of words.
Once we have a scheme of multiple levels available, we can provide the levels with theoretical significance and assign a linguistic interpretation to each of them. In a world of unlimited resources of annotators' time and experience, this would be the optimal solution. The first annotation level would be concerned only with errors in graphemics, followed by levels dedicated to morphemics, morphosyntax, syntax, lexical phenomena, semantics and pragmatics. More realistically, there could be a level for errors in graphemics and morphemics, another for errors in morphosyntax (agreement, government) and one more for everything else, including word order and phraseology.
Our solution is a compromise between corpus users' expected demands and limitations due to the annotators' time and experience. The annotator has a choice of two levels of annotation, and the distinction, based to a large extent on formal criteria, is still linguistically relevant.
At the level of transcribed input (Level 0), the nodes represent the original strings of graphemes. At the level of orthographical and morphological emendation (Level 1), only individual forms are treated. The result is a string consisting of cor-rect Czech forms, even though the sentence may not be correct as a whole. The rule of "correct forms only" has a few exceptions: a faulty form is retained if no correct form could be used in the context or if the annotator cannot decipher the author's intention. On the other hand, a correct form may be replaced by another correct form if the author clearly misspelled the latter, creating an unintended homograph with another form. All other types of errors are emended at Level 2.
Captured errors
A typical learner of Czech makes errors all along the hierarchy of theoretically motivated linguistic levels, starting from the level of graphemics up to the level of pragmatics. Our goal is to emend the input conservatively, modifying incorrect and inappropriate forms and expressions to arrive at a coherent and well-formed result, without any ambition to produce a stylistically optimal solution. Emendation is possible only when the input is comprehensible. In cases where the input or its part is not comprehensible, it is left with a partial or even no annotation.
The taxonomy of errors is rather coarse-grained, a more detailed classification is previewed for a later stage and a smaller corpus sample. It follows the three-level distinction and is based on criteria as straightforward as possible. Whenever the error type can be determined from the way the error is emended, the type is supplied automatically by a post-processing module, together with morphosyntactic tags and lemmas for the correct or emended forms (see § 5.3).
Errors in individual word forms, treated at Level 1, include misspellings (also diacritics and capitalisation), misplaced word boundaries, missing or misused punctuation, but also errors in inflectional and derivational morphology and unknown stems. These types of errors are emended manually, but the annotator is not expected label them by their type -the type of most errors at Level 1 is identified automatically. The only exception where the error type must be assigned manually is when an unknown stem or derivation affix is used.
Whenever the lexeme (its stem and/or suffix) is unknown and can be replaced by a suitable form, it is emended at Level 1. If possible, the form should fit the syntactic context. If no suitable form can be found, the form is retained and marked as unknown. When the form exists, but is not appro-priate in context, it is emended at Level 2 -the reason may be the violation of a syntactic rule or semantic incompatibility of the lexeme. Table 2 gives a list of error types emended at Level 1. Some types actually include subtypes: words can be incorrectly split or joined, punctuation, diacritics or character(s) can be missing, superfluous, misplaced or of a wrong kind. The Links column gives the maximum number of positions at Level 0, followed by the maximum number of position at Level 1 that are related by links for this type of error. The Id column says if the error type is determined automatically or has to be specified manually. Emendations at Level 2 concern errors in agreement, valency and pronominal reference, negative concord, the choice of a lexical item or idiom, and in word order. For the agreement, valency and pronominal reference cases, there is typically an incorrect form, which reflects some properties (morphological categories, valency requirements) of a correct form (the agreement source, syntactic head, antecedent). Table 3 gives a list of error types emended at Level 2. The Ref column gives the number of pointers linking the incorrect form with the correct "source". The annotation scheme is illustrated in Figure 1, using an authentic sentence, split in two halves for space reasons. There are three parallel strings of word forms, including punctuation signs, representing the three levels, with links for corresponding forms. Any emendation is labelled with an error type. 4 The first line is Level 0, imported from the transcribed original, with English glosses below (forms marked by asterisks are incorrect in any context, but they may be comprehensible -as is the case with all such forms in this example). Correct words are linked directly with their copies at Level 1, for emended words the link is labelled with an error type. In the first half of the sentence, unk for unknown form, dia for an error in diacritics, cap for an error in capitalisation. According to the rules of Czech orthography, the negative particle ne is joined with the verb using an intermediate node bnd. A missing comma is introduced at Level 1, labelled as a punctuation error. All the error labels above can be specified automatically in the post-processing step.
Staying with the first half of the sentence, most forms at Level 1 are linked directly with their equivalents at Level 2 without emendations. The reflexive particle se is misplaced as a second position clitic, and is put into the proper position using the link labelled wo for a word-order error. 5 The pronoun ona -'she' in the nominative case -is governed by the form líbit se, and should bear the dative case: jí. The arrow to líbit makes the reason for this emendation explicit. The result could still be improved by positioning Praha after the clitics and before the finite verb nebude, resulting in a word order more in line with the underlying information structure of the sentence, but our policy is to refrain from more subtle phenomena and produce a grammatical rather than a perfect result.
In the second half of the sentence, there is only one Level 1 error in diacritics, but quite a few errors at Level 2. Proto 'therefore' is changed to protože 'because' -a lexical emendation. The main issue are the two finite verbs bylo and vadí. The most likely intention of the author is best expressed by the conditional mood. The two noncontiguous forms are replaced by the conditional 4 The labels for error types used here are simplified for reasons of space and mnemonics. 5 In word-order errors it may be difficult to identify a specific word form violating a rule. The annotation scheme allows for both se and jí to be blamed. However, here we prefer the simpler option and identify just one, more prominent word form. Similarly with mi below. auxiliary and the content verb participle in one step using a 2:2 relation. The intermediate node is labelled by cplx for complex verb forms. The prepositional phrase pro mně 'for me' is another complex issue. Its proper form is pro mě (homonymous with pro mně, but with 'me' bearing accusative instead of dative), or pro mne. The accusative case is required by the preposition pro. However, the head verb requires that this complement bears bare dative -mi. Additionally, this form is a second position clitic, following the conditional auxiliary (also a clitic) in the clitic cluster. The change from PP to the bare dative pronoun and the reordering are both properly represented, including the pointer to the head verb. What is missing is an explicit annotation of the faulty case of the prepositional complement, which is lost during the Level 1 -Level 2 transition, the price for a simpler annotation scheme with fewer levels. It might be possible to amend the PP at Level 1, but it would go against the rule that only forms wrong in isolation are emended at Level 1.
Bojal jsem se že ona se ne bude libit prahu , *feared AUX RFL that she RFL not will *like prague ,
Data Format
To encode the layered annotation described above, we have developed an annotation schema in the Prague Markup Language (PML; http://ufal.mff.cuni.cz/jazz/pml/index_en.html). PML is a generic XML-based data format, designed for the representation of rich linguistic annotation organised into levels. In our schema, each of the higher levels contains information about words on that level, about the corrected errors and about relations to the tokens on the lower levels. Level 0 does not contain any relations, only links to the neighbouring Level 1. In Figure 2, we show a portion (first three words and first two relations) of the Level 1 of the sample sentence encoded in our annotation schema.

<?xml version="1.0" encoding="UTF-8"?>
<adata xmlns="http://utkl.cuni.cz/czesl/">
  <head>
    <schema href="adata_schema.xml" />
    <references>
      <reffile id="w" name="wdata" href="r049.w.xml" />
    </references>
  </head>
  <doc id="a-r049-d1" lowerdoc.rf="w#w-r049-d1">
    ...
    <para id="a-r049-d1p2" lowerpara.rf="w#w-r049-d1p2">
      ...
      <s id="a-r049-d1p2s5">
        <w id="a-r049-d1p2w50"> <token>Bál</token> </w>
        <w id="a-r049-d1p2w51"> <token>jsem</token> </w>
        <w id="a-r049-d1p2w52"> <token>se</token> </w>
        ...
      </s>
      ...
      <edge id="a-r049-d1p2e54">
        <from>w#w-r049-d1p2w46</from>
        <to>a-r049-d1p2w50</to>
        <error> <tag>unk</tag> </error>
      </edge>
      <edge id="a-r049-d1p2e55">
        <from>w#w-r049-d1p2w47</from>
        <to>a-r049-d1p2w51</to>
      </edge>
      ...
    </para>
    ...
  </doc>
</adata>
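Such PML files can be processed with standard XML tooling. The following is a rough, hypothetical Python sketch (not part of the CzeSL toolchain) for collecting the correction edges and their error tags from a Level 1 file, assuming the namespace shown above; the file name in the usage comment is illustrative.

import xml.etree.ElementTree as ET

NS = {"c": "http://utkl.cuni.cz/czesl/"}

def read_edges(path):
    # Collect (source, target, error tags) for every correction edge in a Level 1 file.
    root = ET.parse(path).getroot()
    edges = []
    for edge in root.iter("{http://utkl.cuni.cz/czesl/}edge"):
        src = edge.findtext("c:from", default="", namespaces=NS)
        dst = edge.findtext("c:to", default="", namespaces=NS)
        tags = [t.text for t in edge.findall("c:error/c:tag", NS)]
        edges.append((src, dst, tags))
    return edges

# e.g. read_edges("r049.a.xml") might yield entries such as
# ("w#w-r049-d1p2w46", "a-r049-d1p2w50", ["unk"])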
Annotation process
The whole annotation process proceeds as follows:
• A handwritten document is transcribed into html using off-the-shelf tools (e.g. Open Office Writer or Microsoft Word).
• The information in the html document is used to generate Level 0 and a default Level 1 encoded in the PML format.
• An annotator manually corrects the document and provides some information about errors using our annotation tool.
• Error information that can be inferred automatically is added.

Figure 3: Sample sentence in the annotation tool.
Transcription
The original documents are hand-written, usually the only available option, given that their most common source are language courses and exams. The avoidance of an electronic format is also due to the concern about the use of automatic textediting tools by the students, which may significantly distort the authentic interlanguage. Therefore, the texts must be transcribed, which is very time consuming. While we strive to capture only the information present in the original hand-written text, often some interpretation is unavoidable. For example, the transcribers have to take into account specifics of hand-writing of particular groups of students and even of each individual student (the same glyph may be interpreted as l in the hand-writing of one student, e of another, and a of yet another). When a text allows multiple interpretation, the transcribers may provide all variants. For example, the case of initial letters or word boundaries are often unclear. Obviously, parts of some texts may be completely illegible and are marked as such.
Also captured are corrections made by the student (insertions, deletions, etc.), useful for investi-gating the process of language acquisition.
The transcripts are not spell-checked automatically. In a highly inflectional language, deviations in spelling very often do not only reflect wrong graphemics, but indicate an error in morphology.
Annotation
The manual portion of annotation is supported by an annotation tool we have developed. The annotator corrects the text on appropriate levels, modifies relations between elements (by default all relations are 1:1) and annotates relations with error tags as needed. The context of the annotated text is shown both as a transcribed html document and as a scan of the original document. The tool is written in Java on top of the Netbeans platform. 7 Figure 3 shows the annotation of the sample sentence as displayed by the tool.
Postprocessing
Manual annotation is followed by automatic postprocessing, providing the corpus with additional information:
• Level 1: lemma, POS and morphological categories (this information can be ambiguous)
• Level 2: lemma, POS and morphological categories (disambiguated)
• Level 1: type of error (by comparing the original and corrected strings), with the exception of lexical errors that involve lemma changes (e.g. *kadeřnička -kadeřnice 'hair-dresser')
• Level 2: type of morphosyntactic errors caused by agreement or valency error (by comparing morphosyntactic tags at Level 1 and 2)
• Formal error description: missing/extra expression, erroneous expression, wrong order
• In the future, we plan to automatically tag errors in verb prefixes, inflectional endings, spelling, palatalisation, metathesis, etc.
Conclusion
Error annotation is a very resource-intensive task, but the return on investment is potentially enormous. Depending on the annotation scheme, the corpus user has access to detailed error statistics, which is difficult to obtain otherwise. An errortagged corpus is an invaluable tool to obtain a reliable picture of the learners' interlanguage and to adapt teaching methods and learning materials by identifying the most frequent error categories in accordance with the learner's proficiency level or L1 background. We are expecting plentiful feedback from the error annotation process, which is just starting. As the goal of a sizable corpus requires a realistic setup, we plan to experiment with more and less detailed sets of error types, measuring the time and inter-annotator agreement. A substantially more elaborate classification of errors is previewed for a limited subset of the corpus.
At the same time, the feedback of the annotators will translate into the ongoing tuning of the annotation guidelines, represented by a comprehensive error-tagging manual. We hope in progress in dealing with thorny issues such as the uncertainty about the author's intended meaning, the inference errors, the proper amount of interference with the original, or the occurrence of colloquial language. In all of this, we need to make sure that annotators handle similar phenomena in the same way.
However, the real test of the corpus will come with its usage. We are optimistic -some of the future users are a crucial part of our team and their needs and ideas are the driving force of the project.
Acknowledgements
We wish to thank other members of the project team, namely Milena Hnátková, Tomáš Jelínek, Vladimír Petkevič, and Hana Skoumalová for their numerous stimulating ideas, acute insight and important feedback. We are especially grateful to Karel Šebesta, for all of the above and for initiating and guiding this enterprise.
The work described in this paper is funded by the European Social Fund and the government of the Czech Republic within the operational programme 'Education for Competitiveness' as a part of the project 'Innovation in Education in the Field of Czech as a Second Language' (project no. CZ.1.07/2.2.00/07.0259).
Figure 1: Annotation of a sample sentence
Figure 2: Portion of the Level 1 of the sample sentence encoded in the PML data format.
Corpus | Size | L1 | TL | TL proficiency
ICLE - Internat'l Corpus of Learner English | 3M | 21 | English | advanced
CLC - Cambridge Learner Corpus | 30M | 130 | English | all levels
PELCRA - Polish Learner English Corpus | 0.5M | Polish | English | all levels
USE - Uppsala Student English Corpus | 1.2M | Swedish | English | advanced
HKUST - Hong Kong University of Science and Technology Corpus of Learner English | 25M | Chinese | English | advanced
CLEC - Chinese Learner English Corpus | 1M | Chinese | English | 5 levels
JEFLL - Japanese EFL Learner Corpus | 0.7M | Japanese | English | advanced
FALKO - Fehlerannotiertes Lernerkorpus | 1.2M | various | German | advanced
FRIDA - French Interlanguage Database | 0.2M | various | French | intermediate
CIC - Chinese Interlanguage Corpus | 2M | 96 | Chinese | intermediate
Table 1: Some currently available learner corpora
Table 2: Types of errors at Level 1
Table 3: Types of errors at Level 2
Interlanguage is distinguished by its highly individual and dynamic nature. It is subject to constant changes as the learner progresses through successive stages of acquiring more competence, and can be seen as an individual and dynamic continuum between one's native and target languages.
Transcripts of the spoken parts will be integrated with the rest of the corpus at a later stage of the project.
3 All examples are authentic.
http://platform.netbeans.org/
18,981,690 | In-tool Learning for Selective Manual Annotation in Large Corpora | We present a novel approach to the selective annotation of large corpora through the use of machine learning. Linguistic search engines used to locate potential instances of an infrequent phenomenon do not support ranking the search results. This favors the use of high-precision queries that return only a few results over broader queries that have a higher recall. Our approach introduces a classifier used to rank the search results and thus helping the annotator focus on those results with the highest potential of being an instance of the phenomenon in question, even in low-precision queries. The classifier is trained in an in-tool fashion, except for preprocessing relying only on the manual annotations done by the users in the querying tool itself. To implement this approach, we build upon CSniper 1 , a web-based multi-user search and annotation tool. | [
7734059,
11163854,
13138682,
933983,
5556991
] | In-tool Learning for Selective Manual Annotation in Large Corpora
In-tool Learning for Selective Manual Annotation in Large Corpora

Erik-Lân Do Dinh †
Ubiquitous Knowledge Processing Lab (UKP-TUDA), Department of Computer Science, Technische Universität Darmstadt

Richard Eckart de Castilho
Ubiquitous Knowledge Processing Lab (UKP-TUDA), Department of Computer Science, Technische Universität Darmstadt

Iryna Gurevych
Ubiquitous Knowledge Processing Lab (UKP-TUDA), Department of Computer Science, Technische Universität Darmstadt

‡ Ubiquitous Knowledge Processing Lab (UKP-DIPF), German Institute for Educational Research and Educational Information

Proceedings of the ACL-IJCNLP 2015 System Demonstrations, Beijing, China, July 26-31, 2015. ACL and AFNLP.
We present a novel approach to the selective annotation of large corpora through the use of machine learning. Linguistic search engines used to locate potential instances of an infrequent phenomenon do not support ranking the search results. This favors the use of high-precision queries that return only a few results over broader queries that have a higher recall. Our approach introduces a classifier used to rank the search results and thus helping the annotator focus on those results with the highest potential of being an instance of the phenomenon in question, even in low-precision queries. The classifier is trained in an in-tool fashion, except for preprocessing relying only on the manual annotations done by the users in the querying tool itself. To implement this approach, we build upon CSniper 1 , a web-based multi-user search and annotation tool.
Introduction
With the rapidly growing body of digitally available language data, it becomes possible to investigate phenomena of the language system that manifest themselves infrequently in corpus data, e.g. non-canonical constructions. To pinpoint occurrences of such phenomena and to annotate them requires a new kind of annotation tool, since manual, sequential annotation is not feasible anymore for large amounts of texts.
An annotation-by-query approach to identify such phenomena in large corpora is implemented in the recently published open-source tool CSniper 1 (Eckart de Castilho et al., 2012).

1 https://dkpro.github.io/dkpro-csniper
To enable a selective manual annotation process, a linguistic search engine is used, allowing the creation of queries which single out potential instances of the phenomenon in question. Those potential instances are then displayed to the user, who annotates each one as being an instance of the phenomenon or not. This process of searching and annotating can be performed by multiple users concurrently; the annotations are stored for each user separately. In a subsequent evaluation step, a user can review the annotations of all users, e.g. to discard a query if it yields unsatisfying results. Finally, the annotations of multiple users can be merged into a gold standard. This approach relieves the annotator from having to read through the corpus from the beginning to the end to look for instances of a phenomenon. However, the search may yield many results that may superficially appear to be an instance of the desired phenomenon, but due to ambiguities or due to a broadly defined query only a small subset may be actual instances. This still leaves the annotator with the tedious task of clicking through the search results to mark the true instances.
To reduce the time and effort required, we present an extension of the annotation-by-query approach (Figure 1) that introduces a ranking of the query results (Section 2) by means of machine learning; we order the results by confidence of the used classifier.
To obtain a model for the classifier, we employ an in-tool learning approach, where we learn from the annotations that are made by users in the tool itself. This makes our ranking approach useful for highly specific tasks, since no pre-trained models are needed.
Finally we demonstrate the viability of our concept by the example task of finding non-canonical constructions in Section 3.
Ranking linguistic query results
Our approach employs machine learning to facilitate -but not to completely replace -the manual annotation of query results. A query expresses the intention of the user to find a specific linguistic phenomenon (information need). An information retrieval search engine would provide the user with a list of results that are ranked according to their relevance, fulfilling the information need. However, linguistic search engines such as CQP (Evert and Hardie, 2011) -which is used by CSniper -are basically pattern-matching engines, operating on different lexical or morphosyntactic features like part-of-speech (POS) tags and lemmata and do not have a concept of relevance. Thus, if the query provided by the user overgeneralizes, relevant results are hidden among many irrelevant ones, ultimately failing to satisfy the user's information need.
To tackle this problem, we use the annotations already created by users on the search results to train a classifier. Unannotated query results are then fed to the classifier whose output values are then used as relevance ratings by which the results are ranked. The classifier producing the ranking can be invoked by the user at any time; it can be configured in certain characteristics, e.g. the annotations of which users should be used as training data, or how many of the selected users have to agree on an annotation for it to be included.
Workflow and ranking process in CSniper
Currently, we train the classifier on features derived from the constituency parse tree, which makes it useful for tasks such as locating sentences containing infrequent ambiguous grammatical constructions (cf. Section 3). Since parsing the query results is too time-intensive to be done during runtime, we parsed the corpora in advance and stored the parse trees in a database. To train the classifier, we employed SVM-light-tk (Moschitti, 2006; Joachims, 1999), a support vector machine implementation which uses a tree kernel to integrate all sub-trees of the parse tree as features. Consider the following typical scenario incorporating the ranking: A user constructs a query based on various features, such as POS tags or lemmata, which are used to search for matching sentences, e.g.
"It" [lemma="be"] [pos="AT0"]? [pos="NN.*"] 2
The result is a list of sentences presented in a keywords-in-context (KWIC) view, along with an annotation field ( Figure 2).
Then the user starts to annotate these sentences as Correct or Wrong, depending on whether they truly represent instances of the phenomenon in question. Clicking on the Rank results button (Figure 2) invokes our ranking process: the SVM-light-tk classifier is trained using the parse trees of the sentences which the user previously annotated. The resulting model is then used to classify the remaining sentences in the query results. We rank the sentences according to the output value of the decision function of the classifier (which we interpret as a relevance/confidence rating) and transiently label a sentence as either (Correct) (output value > 0) or (Wrong) (output value ≤ 0). The results in the KWIC view are then reordered according to the rank, showing the highest-ranking result first. Repeatedly annotating those highest-ranking results and re-ranking allows for quickly annotating instances of the phenomenon, while also improving the classifier accuracy at the same time.
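The train-score-sort loop just described can be summarised in a few lines. The sketch below uses scikit-learn's SVC as a stand-in for the tree-kernel SVM-light-tk that CSniper actually uses, and a generic featurize function as a placeholder for the parse-tree feature extraction; it illustrates the ranking idea, not the tool's implementation.

# Minimal sketch of the ranking step: train on the user's annotations,
# score the remaining results, and sort them by classifier confidence.
from sklearn.svm import SVC

def rank_results(annotated, unannotated, featurize):
    """annotated: list of (sentence, label) with label in {0, 1};
    unannotated: list of sentences. Returns the unannotated sentences
    ordered by decreasing decision-function value, with a tentative label."""
    X = [featurize(s) for s, _ in annotated]
    y = [label for _, label in annotated]
    clf = SVC(kernel="linear").fit(X, y)

    scores = clf.decision_function([featurize(s) for s in unannotated])
    ranked = sorted(zip(unannotated, scores), key=lambda p: p[1], reverse=True)
    # Positive decision values are shown as (Correct), the rest as (Wrong).
    return [(s, "Correct" if score > 0 else "Wrong", score) for s, score in ranked]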
Find mode
After annotating instances based on simple queries and ML-supported ranked queries, we considered the natural next step to be searching automatically for the phenomenon in question utilizing machine learning, using arbitrary sentences from the corpus as input for the classifier instead of only the results returned by a query. Such an automatic search could address two concerns: 1) it removes the need for the user to design new queries, allowing users less experienced in the query language to annotate more effectively side-by-side with advanced users; 2) it could optimally generalize over all the queries that users have already made and potentially locate instances that had not been found by individual high-precision queries.
To support this, we implemented the Find mode, to locate instances of a phenomenon while abstracting from the queries. In this mode, the SVM is first trained from all previously (manually) labeled instances for a given phenomenon, without taking the queries into account that were used to find those instances. Then the corpus is partitioned into smaller parts containing a predefined amount of sentences (we used 500). One of these partitions is chosen at random, and the sentences therein are ranked using the SVM. This step is repeated, until a previously defined number of sentences have been classified as Correct. Those sentences are then shown to the user, who now can either confirm a sentence as containing the phenomenon or label it Wrong otherwise.
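A minimal sketch of this Find mode loop, under the same placeholder assumptions as the ranking sketch above (a generic featurize function and a scikit-learn classifier standing in for SVM-light-tk):

# Sketch of the Find mode: split the corpus into fixed-size partitions and
# score randomly chosen partitions until enough candidates are found.
import random

def find_candidates(corpus, clf, featurize, partition_size=500, wanted=50):
    partitions = [corpus[i:i + partition_size]
                  for i in range(0, len(corpus), partition_size)]
    random.shuffle(partitions)

    found = []
    for part in partitions:
        scores = clf.decision_function([featurize(s) for s in part])
        found += [(s, sc) for s, sc in zip(part, scores) if sc > 0]
        if len(found) >= wanted:
            break
    # Present the candidates with the highest confidence first.
    return sorted(found, key=lambda p: p[1], reverse=True)[:wanted]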
Related work
Existing annotation tools include automation functionality for annotation tasks, ranging from rule-based tagging to more complex, machinelearning-based approaches.
Such functionalities can be found in the annotation software WordFreak (Morton and LaCivita, 2003), where a plug-in architecture allows for a variety of different taggers and classifiers to be integrated, for example part-of-speech taggers or coreference resolution engines. Those require pretrained models, which limits the applicability of the automation capabilities of WordFreak to tasks for which such models are actually available. In addition to assigning annotations a single label, WordFreak allows plugins to rank labels for each annotation based on the confidence of the used classifier. Note that this is different to our ranking approach, where we instead perform a ranking of the search results which shall be annotated.
Another tool incorporating machine learning is WebAnno (Yimam et al., 2014), which implements features such as custom labels and annotation types. In addition, WebAnno supports automatic annotation similar to our approach, also employing machine learning to build models from the data annotated by users. Those models are then used to annotate the remainder of the documents. To accomplish this, WebAnno uses a split-pane view, showing automatic suggestions in one pane and manually entered annotations in another. The user can accept a suggested annotation, which is transferred to the manual pane. Lacking the search capability, WebAnno lists automatic annotations in the running corpus text, which makes it unsuited for selective annotation in large corpora. The approach that we implemented on top of CSniper instead ranks the search results for a given query by confidence of the classifier.
Yet another form of in-tool learning is active learning, as is implemented, e.g., in Dualist (Settles, 2011). In an active learning scenario the system aims to efficiently train an accurate classifier (i.e. with as little training data as possible) and thus repeatedly asks the user to annotate instances from which it can learn the most. Such an approach can work well for reducing the amount of training data needed to produce a model which achieves high accuracy, as has been -amongst others -shown by Hachey et al. (2005). However, they also learn in their experiments that those highly informative instances are often harder to annotate and increase required time and effort of annotators. Our approach is different from active learning as our goal is not to improve the training efficiency of the classifier but rather to allow the user to interactively find and label as many true instances of a phenomenon as possible in a large corpus. Thus, the items presented to the user are not determined by the expected information gain for the classifier but rather by the confidence of the classifier, presenting the user with those instances first which are most likely to be occurrences of the phenomenon in question.
Case study: Finding non-canonical constructions
We demonstrate our novel approach on the task of locating non-canonical constructions (NCC) and conduct an intrinsic evaluation of the accuracy of the system augmented with machine learning output on the data annotated by expert linguists. The linguists annotated sentences for occurrences of certain NCC subtypes: information-packaging constructions (Huddleston and Pullum, 2002, pp. 1365ff.), which present information in a different way from their canonical counterparts without changing truth conditions; specifically It-clefts ("It was Peter who made lunch."), NP-preposing ("A treasure, he was searching."), and PP-inversion ("To his left lay the forest.") clauses.
For our experiments, we used the British National Corpus (2007), comprising 100 million words in multiple domains 3 . Constituency parsing was conducted using the factored variant of the Stanford Parser (Klein and Manning, 2003), incorporated into a UIMA pipeline using DKPro Core (Eckart de Castilho and Gurevych, 2014).
As a baseline we use queries representing the experts' intuition about the realization of the NCCs in terms of POS tags and lemmata. We show that our system improves the precision of the query results even with little training data. Also we present run times for our ranking system under real-world conditions for different training set sizes. Further, we compare Krippendorff's α coefficient as an inter-annotator agreement measure among only annotators to the α which treats our system as one additional annotator.
We conducted the experiments based on the manually assigned labels of up to five annotators. If a sentence has been annotated by multiple users, we use the label that has been assigned by the majority; in case of a tie, we ignore the sentence. These so created gold standard annotations were used in an iterative cross-validation setting: for each query and the corresponding annotated sentences we ran nine cross-validation configurations, ranging from a 10/90 split between training and testing data to a 90/10 split, to investigate the reliability of the classifier as well as its ability to achieve usable results with little training data.
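The following sketch approximates this evaluation setup with a single ordered split per configuration (the paper iterates cross-validation folds); classifier and feature extraction are again placeholders, and it assumes both labels occur in every training slice.

# Sketch of the 10/90 ... 90/10 evaluation splits: train on the first
# percent of the gold-labelled sentences, measure precision on the rest.
from sklearn.metrics import precision_score
from sklearn.svm import SVC

def split_evaluation(sentences, labels, featurize):
    """labels are 0 (Wrong) / 1 (Correct); returns precision per split."""
    X = [featurize(s) for s in sentences]
    results = {}
    for percent in range(10, 100, 10):
        cut = max(1, len(X) * percent // 100)
        clf = SVC(kernel="linear").fit(X[:cut], labels[:cut])
        predicted = clf.predict(X[cut:])
        results[f"{percent}/{100 - percent}"] = precision_score(labels[cut:], predicted)
    return results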
For It-clefts, we observe that elaborate queries already have a high precision, on which the SVM improves only marginally. The query
"It" /VCC[] [pos="NP0"]+ /RC[] 4 (it17)
already yields a precision of 0.9598, which does not increase using our method (using 10% as training data, comparing the precision for the remaining 90%). However, while broader queries yield lower precision, the gain by using the SVM becomes significant (Table 1), as exemplified by the precision improvement from 0.4919 to 0.7782 for the following It-cleft query, even at a 10/90 split.
"It" /VCC[] /NP[] /RC[] 5 (it2)
For other inspected types of NCC, even elaborate queries yield a low baseline precision, which our approach can improve significantly. This effect can be observed for example in the following NP-preposing query, where precision can be improved from 0.3946 to 0.5871.

[pos="N.*"]{1,2} [pos="PNP" & word!="I"] [pos="V.*"] 6 (np55)

We conducted a cursory, "real-world" test regarding the speed of the ranking system. 7 Training the SVM on differently sized subsets of the 449 sentences returned by a test query, we measured the time from clicking the Rank results button until the process was complete and the GUI had updated to reorder the sentences (i.e. including database queries, training, classifying, GUI update). The process times averaged over five "runs" for each training set size (20%, 50% and 90%) amount to 5 seconds, 7 seconds, and 14 seconds respectively. This leaves us with the preliminary impression that our system is fast enough for small to medium sized training sets; as the last result suggests, for larger sets it would be desirable for our system to be faster overall. One way to achieve this is to pre-compute the feature vectors used in the training phase once - this could be done at the same time with the parsing of the sentences, i.e. at the setup time of the system.

Table 1: Precision for various NCC queries (Baseline) and for using the SVM with 10%, 50% and 90% training data.
Krippendorff's α, an inter-annotator agreement (IAA) measure which usually assumes values between 0 (no reliable agreement) and 1 (perfect agreement), amounts to 0.8207 averaged over all manually created It-cleft annotations. If we interpret the SVM as an additional annotator (α svm ), the IAA drops to 0.5903. At first glance this seems quite low, however upon closer inspection this can be explained by an overfitting of the classifier. This effect occurs for the already precise baseline queries, where in some cases less than 5% of the query results were labeled as Wrong. The same holds for NP-preposing (α: 0.6574, α svm : 0.3835) and PP-inversion (α: 0.9410, α svm : 0.6964). We interpret this as the classifier being successful in helping the annotators after a brief training phase identifying additional occurrences of particular variants of a phenomenon as covered by the queries, but not easily generalizing to variants substantially different from those covered by the queries.
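The agreement computation can be reproduced along the following lines with NLTK's AnnotationTask; the (coder, item, label) triples shown are invented toy data, not the study's annotations.

# Sketch: Krippendorff's alpha over the human annotators alone, and again
# with the classifier's decisions added as if from one more annotator.
from nltk.metrics.agreement import AnnotationTask

human_triples = [
    ("ann1", "sent1", "Correct"), ("ann2", "sent1", "Correct"),
    ("ann1", "sent2", "Wrong"),   ("ann2", "sent2", "Correct"),
]
svm_triples = [("svm", "sent1", "Correct"), ("svm", "sent2", "Wrong")]

alpha_humans = AnnotationTask(data=human_triples).alpha()
alpha_with_svm = AnnotationTask(data=human_triples + svm_triples).alpha()
print(alpha_humans, alpha_with_svm)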
Conclusion
With automatic ranking we introduced an extension to the annotation-by-query workflow which facilitates manual, selective annotation of large corpora. We explained the benefits of in-tool learning to this task and our extension of an opensource tool to incorporate this functionality. Finally, we showed the applicability of the concept and its implementation to the task of finding noncanonical constructions.
For future work, we plan to speed up the learning process (e.g. by saving feature vectors instead of re-calculating them), and also add the ability for users to configure the features used to train the classifier, e.g. incorporating lemmata or named entities instead of only using the parse tree. Integrating such configuration options in an easily understandable and user-friendly fashion may not be trivial but can help to generalize the approach to support additional kinds of sentence level annotation.
Figure 1: Annotation-by-query workflow extended with a ranking step.
Figure 2: A screenshot showing the results table after the ranking process, with sentences sorted by confidence of the classifier (Score). The results are shown in a keywords-in-context (KWIC) view, separating left context, query match and right context (within a range of one sentence). Clicking on (Correct) changes the label to Correct.
2 It-cleft example query: "It", followed by a form of "to be", an optional determiner and a common noun.
3 CSniper and the used SVM implementation are language independent, which allowed us to also run additional preliminary tests using German data.
4 "It", verb clause, one or more proper nouns, relative clause. VCC, NP, and RC are macros we defined in CQP, see Table 2.
5 "It", verb clause, noun phrase, relative clause.
6 One to two nouns, personal pronoun other than "I", verb.
7 System configuration: Intel i5 2.4 GHz, 2 GB RAM, SSD 3 GB/s, Linux in a VM.
Acknowledgements

We would like to thank Pia Gerhard, Sabine Bartsch, Gert Webelhuth, and Janina Rado for annotating and testing. Furthermore we would like to thank Janina Rado for creating the CQP macros used in the tests. This work has been supported by the German Federal Ministry of Education and Research (BMBF) under the promotional reference 01UG1416B (CEDIFOR), by the German Institute for Educational Research (DIPF) as part of the graduate program "Knowledge Discovery in Scientific Literature" (KDSL), and by the Volkswagen Foundation as part of the Lichtenberg-Professorship Program under grant No. I/82806.
VCC ([pos="VBB" | pos="VBD" | pos="VBZ"]* [lemma="be"]) | ([pos="V.*"]* [pos="VBG" | pos="VBI" | pos="VBN"]* [lemma="be"])
NP [pos="AT0"]? []? [pos="AJ.*"]* [pos="N.*"]
RC ([pos="DTQ" | pos="PNQ" | pos="CJT"] /VCF[] []?) | ([pos="CJT"]? /NP[] /VCF[] []?) | ([pos="PR.*"]* [pos=".Q"] /NP[] /VCF[] []?)
VCF [pos="V.?B" | pos="V.?D" | pos="V.?Z" | pos="VM0"] [pos="V.*"]*
Table 2: CQP macro expansions for self-defined macros. BNC uses the CLAWS5 tagset for POS tags (http://www.natcorp.ox.ac.uk/docs/c5spec.html).

Richard Eckart de Castilho and Iryna Gurevych. 2014. A broad-coverage collection of portable NLP components for building shareable analysis pipelines. In Proceedings of the Workshop on OIAF4HLT at COLING 2014, pages 1-11.
Richard Eckart de Castilho, Iryna Gurevych, and Sabine Bartsch. 2012. CSniper: Annotation-by-query for Non-canonical Constructions in Large Corpora. In Proceedings of ACL 2012, System Demonstrations, pages 85-90, Stroudsburg, PA, USA. ACL.
Stefan Evert and Andrew Hardie. 2011. Twenty-first century corpus workbench: Updating a query architecture for the new millennium. In Proceedings of CL2011, Birmingham, UK.
Ben Hachey, Beatrice Alex, and Markus Becker. 2005. Investigating the Effects of Selective Sampling on the Annotation Task. In Proceedings of CoNLL 2005, pages 144-151, Stroudsburg, PA, USA. ACL.
Rodney D. Huddleston and Geoffrey K. Pullum. 2002. The Cambridge Grammar of the English Language. Cambridge University Press.
Thorsten Joachims. 1999. Making large-scale support vector machine learning practical. In Bernhard Schölkopf, Christopher J. C. Burges, and Alexander J. Smola, editors, Advances in Kernel Methods, pages 169-184. MIT Press, Cambridge, MA, USA.
Dan Klein and Christopher D. Manning. 2003. Accurate Unlexicalized Parsing. In Proceedings of ACL 2003, pages 423-430, Stroudsburg, PA, USA. ACL.
Thomas Morton and Jeremy LaCivita. 2003. WordFreak: An open tool for linguistic annotation. In Proceedings of NAACL HLT 2003, NAACL-Demonstrations, pages 17-18, Stroudsburg, PA, USA. ACL.
Alessandro Moschitti. 2006. Making Tree Kernels Practical for Natural Language Learning. In Proceedings of EACL 2006, pages 113-120, Trento, Italy.
Burr Settles. 2011. Closing the Loop: Fast, Interactive Semi-supervised Annotation with Queries on Features and Instances. In Proceedings of EMNLP 2011, pages 1467-1478, Stroudsburg, PA, USA. ACL.
The British National Corpus, version 3 (BNC XML Edition). 2007. Distributed by Oxford University Computing Services on behalf of the BNC Consortium. URL: http://www.natcorp.ox.ac.uk/.
Seid Muhie Yimam, Richard Eckart de Castilho, Iryna Gurevych, and Chris Biemann. 2014. Automatic Annotation Suggestions and Custom Annotation Layers in WebAnno. In Proceedings of ACL 2014, System Demonstrations, pages 91-96, Stroudsburg, PA, USA. ACL.
16,078,731 | ReadME Generation from an OWL Ontology Describing NLP Tools | The paper deals with the generation of ReadME files from an ontology-based description of NLP tool. ReadME files are structured and organised according to properties defined in the ontology. One of the problem is being able to deal with multilingual generation of texts. To do so, we propose to map the ontology elements to multilingual knowledge defined in a SKOS ontology. | [
2923448,
16391747
] | ReadME Generation from an OWL Ontology Describing NLP Tools
September 6th
Driss Sadoun
ERTIM, INALCO, Paris, France

Satenik Mkhitaryan
ERTIM, INALCO, Paris, France

Damien Nouvel
ERTIM, INALCO, Paris, France

Mathieu Valette
ERTIM, INALCO, Paris, France
ReadME Generation from an OWL Ontology Describing NLP Tools
Proceedings of the 2nd International Workshop on Natural Language Generation and the Semantic Web (WebNLG), Edinburgh, Scotland, September 6th.
The paper deals with the generation of ReadME files from an ontology-based description of NLP tool. ReadME files are structured and organised according to properties defined in the ontology. One of the problem is being able to deal with multilingual generation of texts. To do so, we propose to map the ontology elements to multilingual knowledge defined in a SKOS ontology.
Introduction
A ReadMe file is a simple and short written document that is commonly distributed along with a computer software, forming part of its documentation. It is generally written by the developer and is supposed to contain basic and crucial information that the user reads before installing and running the software.
Existing NLP software may range from unstable prototypes to industrial applications. Many of these tools are developed by researchers, in the framework of temporary projects (training, PhD theses, funded projects). As their use is often restricted to their developers, they do not always meet information technology (IT) requirements in terms of documentation and reusability. This is especially the case for tools for under-resourced languages, which are often developed by researchers and released without standard documentation, or with documentation written fully or partly in the developer's native language.
Providing a clear ReadMe file is essential for effective software distribution and use: a confusing one could prevent the user from using the software. However, there is no well established guidelines or good practices for writing a ReadMe.
In this paper we propose an ontology-based approach for the generation of ordered and structured ReadMe files for NLP tools. The ontology defines a meta-data model built based on a joint study of NLP tool documentation practices and existing meta-data model for language resources (cf. section 2). Translation functions (TFs) for different languages (currently eight) are associated to ontology properties characterising NLP tools. These TFs are defined within the Simple Knowledge Organization System (SKOS) (cf. section 2.2). The ontology is filled via an on-line platform by NLP experts speaking different languages. Each expert describes the NLP tools processing the languages he speaks (cf. section 3). A ReadMe file is then generated in different languages for each tool described within the ontology (cf. section 3). Figure 1 depicts the whole process of multilingual ReadMe generation.
NLP tools ontology
This work takes place in the framework of the project MultiTal, which aims at making NLP tool descriptions available through an on-line platform, containing factual information and verbose descriptions that should ease installation and use of the considered NLP tools. This project involves numerous NLP experts in diverse languages, currently Arabic, English, French, Hindi, Japanese, Mandarin Chinese, Russian, Ukrainian and Tibetan. Our objective is to take advantage of the NLP experts' knowledge both to retrieve NLP tools in their languages and to generate multilingual ReadMe files for the retrieved NLP tools. A first step to reach this goal is to propose a conceptual model whose elements are as independent of the language as possible. The next step is to associate with each conceptual element a lexicalisation for each targeted language.
Ontology conceptualisation
In order to conceptualise an ontology that structures and standardises the description of NLP tools we proceeded to a joint study of:
• Documentation for various NLP tools processing aforementioned languages that have been installed and closely tested;
• A large collection (around ten thousands) of structured ReadMe in the Markdown format, crawled from GitHub repositories;
• Meta-data models for Language Resources (LR) as the CMDI (Broeder et al., 2012) or META-SHARE meta-data model ontology (McCrae et al., 2015).
This study gave us guidelines to define bundles of properties sharing a similar semantic. For example, properties referring to the affiliation of the tool (as hasAuthor, hasLaboratory or hasProjet), to its installation or its usage.
We distinguish two levels of meta-data: 1) a mandatory level providing basic elements that constitute a ReadMe file and 2) a nonmandatory level that contains additional information as relations to other tools, fields or methods. These latter serve tools' indexation within the on-line platform. Figure 2 details the major bundles of properties that we conceptualized to describe an NLP tool. The processed languages are defined within the bundle Task. Indeed, an NLP tool may have different tasks which may apply to different languages.
As our ambition is to propose pragmatic descriptions detailing the possible installation and execution procedures, we particularly focused on the decomposition of these procedures into atomic actions.
Multilingual translation functions
Within the ontology, NLP tools are characterised by their properties. Values allocated to these properties are as far as possible independent of the language (date of creation and last update, developer or license names, operating system information, ...). Hence, what needs to be lexicalised is the semantics of each defined property. Each NLP expert associates with each property a translation function (TF) that formalises the lexical formulation of the property in the language he speaks. TFs are defined once for each language. The amount of work has not exceeded half a day per language to associate TFs with the around eighty properties of the ontology. In order to ensure a clean separation between the conceptual and the lexical layer, TFs are defined within a SKOS ontology. The SKOS ontology structure is automatically created from the OWL ontology. Thus, adding a new language essentially consists in adding, within SKOS, TFs in that particular language for each OWL property. Translation functions are of two kinds:
1. P(V1) ; * V1 * @lang
2. P(V1, V2) ; * V1 * V2 * @lang or * V2 * V1 * @lang
with P a property, * a set of words that can be empty, V1 and V2 values of the property P, and @lang an OWL language tag that determines the language in which the property is lexicalised. Below are two examples of translation functions for Japanese that have been associated with the properties authorFirstName and download.
• authorFirstName(V1) ; * V1 * @jp
• download(V1, V2) ; * V2 * V1 * @jp
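A sketch of how such translation functions might be applied at generation time is given below. The template strings for the two languages are invented for illustration; in the actual system the TFs are stored per property and language in the SKOS ontology.

# Illustrative sketch: lexicalise an OWL property in a target language by
# filling a per-(property, language) template with the property's values.
# The templates below are invented placeholders, not the project's TFs.
TRANSLATION_FUNCTIONS = {
    ("authorFirstName", "en"): "First name of the author: {0}",
    ("authorFirstName", "fr"): "Prénom de l'auteur : {0}",
    ("download", "en"): "download {0} via {1}",
}

def lexicalise(prop: str, values: tuple, lang: str) -> str:
    template = TRANSLATION_FUNCTIONS[(prop, lang)]
    return template.format(*values)

print(lexicalise("download", ("jieba-0.38.zip", "wget"), "en"))
# -> download jieba-0.38.zip via wget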
Natural language generation of multilingual ReadMe files
In our framework, each NLP expert finds, installs and uses available NLP tools processing the language he speaks. Then, he describes every tool that runs correctly via an on-line platform connected to the ontology (cf. Figure 1). Elements of description do not only come from an existing ReadMe as if they exist, they are rarely exhaustive. Hence, experts also gather tool information from the web and during installing and testing each tool. At this step, the OWL ontology is filled and the translated functions of each property are defined within the SKOS ontology. Our aim is to generate ordered and structured ReadMe files in different languages. To do so, we use Natural language generation (NLG) techniques adapted to the Semantic Web (also named Ontology verbalisation) (Staykova, 2014;Bouayad-Agha et al., 2014;Cojocaru and Trãuşan Matu, 2015;Keet and Khumalo, 2016). NLG can be divided in several tasks (Reiter and Dale, 2000;Staykova, 2014). Our approach currently includes: content selection, document structuring, knowledge aggregation, and lexicalisation. The use of more advanced tasks as referring expression aggregation, linguistic realisation and structure realisation is in our perspectives.
Ontology content selection and structuring
Unlike the majority of ontology verbalisation approaches, we do not intend to verbalise the whole content of the ontology. We simply verbalise properties and their values that characterise a pertinent piece of information that has to appear in a ReadMe file. The concerned properties are those which belong to the mandatory level (cf. section 2.1). The structure of ReadMe files is formalised within the ontology. First, ReadMe files are organised in sections based on bundles of properties defined in the ontology (cf. Figure 2). Within each section, the order of properties is predefined. Both installation and execution procedures are decomposed into their atomic actions. These actions are automatically numbered according to their order of execution (cf. Figure 3). Different installation and execution procedures may exist according to the operating system (Linux, Windows, ...), architecture (32bits, 64bits, 86bits, ...), language platform (JAVA 8, Python 3, ...) and so on. As well, execution procedures depend on the tasks the NLP tool performs and the languages it processes. Thus, each procedure is distinguished and its information grouped under its heading. Moreover, execution procedures are also ordered, as an NLP tool may have to perform tasks in a particular ordered sequence. This structuring is part of the ontology conceptualisation. It consists in defining property and sub-property relations and in associating a sequence number to each property that has to be lexicalised.
Ontology content aggregation and lexicalisation
Following the heuristics proposed in (Androutsopoulos et al., 2014) and (Cojocaru and Trãuşan Matu, 2015) to obtain concise text, OWL property values are aggregated when they characterise the same object. For example, if an execution procedure (ep_i) has two values for operating system (e.g. Linux and Mac), then the two values are merged as follows: hasOS(ep_i, Linux) ∧ hasOS(ep_i, Mac) ⇒ hasOS(ep_i, Linux and Mac). The last step consists in property lexicalisation. While a number of approaches rely on ontology elements' names and labels (often in English) to infer a lexicalisation (Bontcheva, 2005; Sun and Mellish, 2006; Williams et al., 2011), in our approach the lexicalisation of properties depends only on their translation functions. During the ontology verbalisation, each targeted language is processed one after the other. The TF of each encountered property for the current language is retrieved and used to lexicalise the property. Property values are considered as variables of the TFs. They are not translated, as we ensure that they are as much as possible independent of the language. Figure 3 gives an example of two installation procedures for the NLP tool Jieba, which processes Chinese. In this example, actions are lexicalised in English. Furthermore, the lexicalised command lines appear between brackets.
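A minimal sketch of the aggregation step described at the start of this subsection, using the hasOS example above; the triple representation is a simplification of the OWL data.

# Sketch: values of the same property on the same object are merged into a
# single value before lexicalisation.
from collections import defaultdict

def aggregate(triples):
    """triples: list of (subject, property, value); values sharing the same
    subject and property are joined with 'and'."""
    grouped = defaultdict(list)
    for subject, prop, value in triples:
        grouped[(subject, prop)].append(value)
    return [(s, p, " and ".join(vs)) for (s, p), vs in grouped.items()]

print(aggregate([("ep1", "hasOS", "Linux"), ("ep1", "hasOS", "Mac")]))
# -> [('ep1', 'hasOS', 'Linux and Mac')]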
As a result of this generation, all ReadMe files have the same structure, organisation and, as much as possible, level of detail, especially regarding installation and execution procedures, which represent the key information for a tool's usage. The resulting texts are simple, which suits a ReadMe. However, it could be valuable to use more advanced NLG techniques such as referring expression aggregation, linguistic realisation and structure realisation to produce less simplified natural language texts.
Conclusion
We proposed an ontology-based approach for generating simple, structured and organised ReadMe files in different languages. ReadMe structuring and lexicalisation is guided by the ontology properties and their associated translation functions for the targeted languages. The generated ReadMes are intended to be accessible via an on-line platform. This platform documents, in several languages, NLP tools processing different languages. In the near future, we plan to evaluate the complexity for end-users of different levels of expertise to install and execute NLP tools using our generated ReadMe files. We also hope that, as a side-product, the proposed conceptualisation may provide a starting point to establish guidelines and best practices that NLP tool documentation often lacks, especially for under-resourced languages.
Figure 1: ReadMe generation process
Figure 2: Bundles of properties representing ReadMe sections
Figure 3: Two installation procedures of the NLP tool Jieba lexicalised in English.
Procedure name: wget - ubuntu
1- download jieba-0.38.zip via wget (wget https://pypi.python.org/packages/f6/86/9e721cc52075a07b7d07eb12bcb5dde771d35332a3dae1e14ae4290a197a/jieba-0.38.zip)
2- unzip jieba-0.38.zip (unzip jieba-0.38.zip)
3- go to the directory jieba-0.38 (cd jieba-0.38/)
4- type the command: python setup.py install
Procedure name: pip - ubuntu
1- type the command: sudo pip install jieba
Ion Androutsopoulos, Gerasimos Lampouras, and Dimitrios Galanis. 2014. Generating natural language descriptions from OWL ontologies: the NaturalOWL system. CoRR, abs/1405.6164.
Kalina Bontcheva. 2005. Generating Tailored Textual Summaries from Ontologies. In The Semantic Web: Research and Applications: Second European Semantic Web Conference, ESWC, pages 531-545.
Nadjet Bouayad-Agha, Gerard Casamayor, and Leo Wanner. 2014. Natural language generation in the context of the semantic web. Semantic Web, 5(6):493-513.
Daan Broeder, Dieter Van Uytvanck, Maria Gavrilidou, Thorsten Trippel, and Menzo Windhouwer. 2012. Standardizing a component metadata infrastructure. In LREC, pages 1387-1390.
Dragoş Alexandru Cojocaru and Ştefan Trãuşan-Matu. 2015. Text generation starting from an ontology. In Proceedings of the Romanian National Human-Computer Interaction Conference - RoCHI, pages 55-60.
C. Maria Keet and Langa Khumalo. 2016. Toward a knowledge-to-text controlled natural language of isiZulu. Language Resources and Evaluation, pages 1-27.
John P. McCrae, Penny Labropoulou, Jorge Gracia, Marta Villegas, Víctor Rodríguez-Doncel, and Philipp Cimiano. 2015. One Ontology to Bind Them All: The META-SHARE OWL Ontology for the Interoperability of Linguistic Datasets on the Web. In ESWC (Satellite Events), pages 271-282.
Ehud Reiter and Robert Dale. 2000. Building Natural Language Generation Systems. Cambridge University Press.
Kamenka Staykova. 2014. Natural language generation and semantic technologies. Cybernetics and Information Technologies, 14(2):3-23.
Xiantang Sun and Chris Mellish. 2006. Domain independent sentence generation from RDF representations for the semantic web. In Combined Workshop on Language-Enabled Educational Technology and Development and Evaluation of Robust Spoken Dialogue Systems, European Conference on AI.
Sandra Williams, Allan Third, and Richard Power. 2011. Levels of organisation in ontology verbalisation. In 13th European Workshop on Natural Language Generation, pages 158-163.
49,530,386 | Predicting Human Metaphor Paraphrase Judgments with Deep Neural Networks | We propose a new annotated corpus for metaphor interpretation by paraphrase, and a novel DNN model for performing this task. Our corpus consists of 200 sets of 5 sentences, with each set containing one reference metaphorical sentence, and four ranked candidate paraphrases. Our model is trained for a binary classification of paraphrase candidates, and then used to predict graded paraphrase acceptability. It reaches an encouraging 75% accuracy on the binary classification task, and high Pearson (.75) and Spearman (.68) correlations on the gradient judgment prediction task. | [
646594,
7380676,
7929514,
44530157,
1870512,
989439
] | Predicting Human Metaphor Paraphrase Judgments with Deep Neural Networks
Predicting Human Metaphor Paraphrase Judgments with Deep Neural Networks

Yuri Bizzoni
University of Gothenburg

Shalom Lappin
University of Gothenburg

Proceedings of the Workshop on Figurative Language Processing, New Orleans, Louisiana, June 6, 2018. Association for Computational Linguistics.
We propose a new annotated corpus for metaphor interpretation by paraphrase, and a novel DNN model for performing this task. Our corpus consists of 200 sets of 5 sentences, with each set containing one reference metaphorical sentence, and four ranked candidate paraphrases. Our model is trained for a binary classification of paraphrase candidates, and then used to predict graded paraphrase acceptability. It reaches an encouraging 75% accuracy on the binary classification task, and high Pearson (.75) and Spearman (.68) correlations on the gradient judgment prediction task.
Introduction
Metaphor is an increasingly studied phenomenon in computational linguistics. But while metaphor detection has received considerable attention in the NLP literature (Dunn et al., 2014;Veale et al., 2016) and in corpus linguistics (Krennmayr, 2015) in recent years, not much work has focused on the task of metaphor paraphrasing -assigning an appropriate interpretation to a metaphorical expression. Moreover, there are few (if any) annotated corpora of metaphor paraphrases (Shutova and Teufel, 2010). The main papers in this area are Shutova (2010), and Bollegala and Shutova (2013). The first applies a supervised method combining WordNet and distributional word vectors to produce the best paraphrase of a single verb used metaphorically in a sentence. The second approach, conceptually related to the first, builds an unsupervised system that, given a sentence with a single metaphorical verb and a set of potential paraphrases, selects the most accurate candidate through a combination of mutual information scores and distributional similarity.
Despite the computational and linguistic interest of this task, little research has been devoted to it.
Some quantitative analyses of figurative language have involved metaphor interpretation and paraphrasing. These focus on integrating paraphrase into automatic Textual Entailment frames (Agerri, 2008), or on exploring the properties of distributional semantics in larger-than-word structures (Turney, 2013). Alternatively, they study the sentiment features of metaphor usage (Mohammad et al., 2016; Kozareva, 2015). This last aspect of figurative interpretation is considered a particularly hard task and has generated several approaches.
The task of metaphor interpretation is a particular case of paraphrase detection, although this characterization is not unproblematic, as we will see in Section 6.
In Bollegala and Shutova (2013), metaphor paraphrase is treated as a ranking problem. Given a metaphorical usage of a verb in a short sentence, several candidate literal sentences are retrieved from the Web and ranked. This approach requires the authors to create a gradient score to label their paraphrases, a perspective that is now gaining currency in broader semantic similarity tasks (Agirre et al., 2016). Mohammad et al. (2016) resort to metaphor paraphrasing in order to perform a quantitative study on the emotions associated with the usage of metaphors. They create a small corpus of paraphrase pairs formed from a metaphorical expression and a literal equivalent. They ask candidates to judge the degree of "emotionality" conveyed by the metaphorical and the literal expressions. While the study has shown that metaphorical paraphrases are generally perceived as more emotionally charged than their literal equivalents, a corpus of this kind has not been used to train a computational model for metaphor paraphrase scoring.
In this paper we present a new dataset for metaphor paraphrase identification and ranking. In our corpus, paraphrase recognition is treated as an ordering problem, where sets of sentences are ranked with respect to a reference metaphor sentence.
The main difference with respect to existing work in this field consists in the syntactic and semantic diversity covered by our dataset. The metaphors in our corpus are not confined to a single part of speech. We introduce metaphorical examples of nouns, adjectives, verbs and a number of multi-word metaphors.
Our corpus is, to the best of our knowledge, the largest existing dataset for metaphor paraphrase detection and ranking.
As we describe in Section 2, it is composed of groups of five sentences: one metaphor, and four candidates that can be ranked as its literal paraphrases.
The inspiration for the structure of our dataset comes from a recent work on paraphrase (Bizzoni and Lappin, 2017), where a similarly organized dataset was introduced to deal with paraphrase detection.
In our work, we use an analogous structure to model metaphor paraphrase. Also, while Bizzoni and Lappin (2017) present a corpus annotated by a single human, each paraphrase set in our corpus was judged by 20 different Amazon Mechanical Turk (AMT) annotators, making the grading of our sentences more robust and reliable (see Section 2.1).
We use this corpus to test a neural network model formed by a combination of Convolutional Neural Networks (CNNs) and Long Short Term Memory Recurrent Neural Networks (LSTM RNNs). We test this model on two classification problems: (i) binary paraphrase classification and (ii) paraphrase ranking. We show that our system can achieve significant correlation with human judgments on the ranking task as a byproduct of supervised binary learning. To the best of our knowledge, this is the first work in metaphor paraphrasing to use supervised gradient representations.
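As an illustration, one plausible instantiation of such a CNN + LSTM sentence-pair classifier is sketched below in Keras; vocabulary size, sequence length and layer sizes are placeholders, and this is not necessarily the exact architecture used here.

# Hedged sketch: a shared CNN + LSTM encoder applied to both sentences,
# with the two encodings concatenated and fed to a binary output layer.
from tensorflow.keras import layers, models

def build_encoder(vocab=20000, maxlen=50):
    inp = layers.Input(shape=(maxlen,))
    x = layers.Embedding(vocab, 100)(inp)
    x = layers.Conv1D(64, 3, activation="relu")(x)
    x = layers.MaxPooling1D(2)(x)
    x = layers.LSTM(64)(x)
    return models.Model(inp, x)

encoder = build_encoder()
sent_a, sent_b = layers.Input(shape=(50,)), layers.Input(shape=(50,))
merged = layers.concatenate([encoder(sent_a), encoder(sent_b)])
out = layers.Dense(1, activation="sigmoid")(merged)  # binary paraphrase score
model = models.Model([sent_a, sent_b], out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])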
A New Corpus for Metaphor Paraphrase Evaluation
We present a dataset for metaphor paraphrase designed to allow users to rank non-metaphorical candidates as paraphrases of a metaphorical sentence or expression. Our corpus is formed of 200 sets of five sentence paraphrase candidates for a metaphorical sentence or expression. 1 In each set, the first sentence contains a metaphor, and it provides the reference sentence to be paraphrased. The remaining four sentences are labeled on a 1-4 scale based on the degree to which they paraphrase the reference sentence. This is on analogy with the annotation frame used for SemEval Semantic Similarity tasks (Agirre et al., 2016). Broadly, our labels represent the following categories:
1 Two sentences cannot be considered paraphrases.
2 Two sentences cannot be considered paraphrases, but they show a degree of semantic similarity.
3 Two sentences could be considered paraphrases, although they present some important difference in style or content (they are not strong paraphrases).
4 Two sentences are strong paraphrases.
On average, every group of five sentences contains a strong paraphrase, a loose paraphrase and two non-paraphrases, one of which may use some relevant words from the metaphor in question. 2 The following examples illustrate these ranking labels.
• Metaphor: The crowd was a river in the street
- The crowd was large and impetuous in the street. Score: 4
- There were a lot of people in the street. Score: 3
- There were few people in the street. Score: 2
- We reached a river at the end of the street. Score: 1
We believe that this annotation scheme is useful. While it sustains graded semantic similarity labels, it also provides sets of semantically related elements, each one of which can be scored or ordered independently of the others. Therefore, the metaphorical sentence can be tested separately for each literal candidate in the set in a binary classification task.
In the test phase, the annotation scheme allows us to observe how a system represents the similarity between a metaphorical and a literal sentence by taking the scores of two candidates as points of relative proximity to the metaphor.
It can be argued that a good literal paraphrase of a metaphor needs to compensate to some extent for the expressive or sentimental bias that a metaphor usually supplies, as argued in Mohammad et al. (2016). In general a binary classification can be misleading because it conceals the different levels of similarity between competing candidates.
For example, the literal sentence Republican candidates during the convention were terrible can be considered to be a loose paraphrase of the metaphor The Republican convention was a horror show, or alternatively, as a semantically related non-paraphrase. Which of these conclusions we adopt depends on our decision concerning how much interpretative content a literal sentence needs to provide in order to qualify as a valid paraphrase of a metaphor. The question whether the two sentences are acceptable paraphrases or not can be hard to answer. By contrast, it would be far fetched to suggest that The Republican convention was a joy to follow is a better or even equally strong literal paraphrase for The Republican convention was a horror show.
In this sense, the sentences Her new occupation was a dream come true and She liked her new occupation can be considered to be loose paraphrases, in that the term liked can be judged an acceptable, but not ideal interpretation of the more intense metaphorical expression a dream come true. By contrast, She hated her new occupation cannot be plausibly regarded as more similar in meaning than She liked her new occupation to Her new occupation was a dream come true.
Our training dataset is divided into four main sections:
1. Noun phrase metaphors: My lawyer is an angel.
2. Adjective metaphors: The rich man had a cold heart.
3. Verb metaphors: She cut him down with her words.
4. Multi-word metaphors: The seeds of change were planted in 1943.
All these sentences and their candidates were manually produced to ensure that for each group we have a strong literal paraphrase, a loose literal paraphrase and two semantically related non-paraphrases. Here "semantically related" can indicate either a re-use of the metaphorical words to express a different meaning, or an unacceptable interpretation of the reference metaphor.
Although the paraphrases were generated freely and cover a number of possible (mis)interpretations, we did take several issues into account. For example, for sentiment related metaphors two opposite interpretations are often proposed, forcing the system to make a choice between two sentiment poles when ranking the paraphrases (I love my job -I hate my job for My job is a dream). In general, antonymous interpretations (Time passes very fast -Time is slow for Time flies) are listed, when possible, among the four competing choices.
Our corpus has the advantage of being suitable for both binary classification and gradient paraphrase judgment prediction. For the former, we map every score over a given gradient threshold label to 1, and scores below that threshold to 0. For gradient classification, we use all the scoring labels to test the correlation between the system's ordered predictions and human judgments. We will show how, once a model has been trained for a binary detection task, we can evaluate its performance on the gradient ordering task.
We stress that our corpus is under development. As far as we know it is unique for the kind of task we are discussing. The main difficulty in building this corpus is that there is no obvious way to collect the data automatically. Even if there were a procedure to extract pairs of paraphrases containing a metaphoric element semi-automatically, it does not seem possible to generate alternative paraphrase candidates automatically.
The reference sentences we chose were either selected from published sources or created manually by the authors. In all cases, the paraphrase candidates had to be crafted manually. We tried to keep a balanced diversity inside the corpus. The dataset is divided among metaphorically used Nouns, Adjectives and Verbs, plus a section of Multi Word metaphors. The corpus is an attempt to represent metaphor in different parts of speech.
A native speaker of English independently checked all the sentences for acceptability.
Collecting judgments through AMT
Originally, one author individually annotated the entire corpus. The difference between strong and loose literal paraphrases can be a matter of individual sensibility.
While such annotations could be used as the basis for a preliminary study, we needed more judgments to build a statistically reliable annotated dataset. Therefore we used crowd sourcing to solicit judgments from large numbers of annotators. We collected human judgments on the degree of paraphrasehood for each pair of sentences in a set (with the reference metaphor sentence in the pair) through Amazon Mechanical Turk (AMT).
Annotators were presented with four metaphor -candidate paraphrase pairs, all relating to the same metaphor. They were asked to express a judgment between 1 and 4, according to the scheme given above.
We collected 20 human judgments for each metaphor-candidate paraphrase pair. Analyzing individual annotators' response patterns, we were able to filter out a small number of "rogue" annotators (less than 10%). This filtering process was based on annotators' answers to some control elements inserted in the corpus, and on an evaluation of their overall performance. For example, an annotator who consistently assigned the same score to all sentences was classified as "rogue".
We then computed the mean judgment for each sentence pair and compared it with the original judgments expressed by one of the authors. We found a high Pearson correlation between the annotators' mean judgments and the author's judgment of close to 0.93.
The annotators' understanding of the problem and their evaluation of the sentence pairs seem, on average, to correspond very closely to that of our original single annotator. The high correlation also suggests a small level of variation from the mean across AMT annotators. Finally, a similar correlation strengthens the hypothesis that paraphrase detection is better modeled as an ordering, rather than a binary, task. If this had not been the case, we would expect more polarized judgments tending towards the highest and lowest scores, instead of the more evenly distributed judgment patterns that we observed.
These mean judgments appear to provide reliable data for supervision of a machine learning model. We thus set the upper bound for the performance of a machine learning algorithm trained on this data to be around .9, on the basis of the Pearson correlation with the original single annotator scores. In what follows, we refer to the mean judgments of AMT annotators as our gold standard when evaluating our results, unless otherwise indicated.
A DNN for Metaphor Paraphrase Classification
For classification and gradient judgment prediction we constructed a deep neural network. Its architecture consists of three main components:
1. Two encoders that learn the representation of the two sentences separately.
2. A unified layer that merges the output of the encoders.
3. A final set of fully connected layers that operate on the merged representation of the two sentences to generate a judgment.
The encoder for each pair of sentences taken as input is composed of two parallel Convolutional Neural Networks (CNNs) and LSTM RNNs, feeding two sequenced fully connected layers. We use an "Atrous" CNN (Chen et al., 2016). Interestingly, classical CNNs only decrease our accuracy by approximately two points and reach a good F1 score, as Table 1 indicates.
Using a CNN (we apply 25 filters of length 5) as a first layer proved to be an efficient strategy. While CNNs were originally introduced in the field of computer vision, they have been successfully applied to problems in computational semantics, such as text classification and sentiment analysis (Lai et al., 2015), as well as to paraphrase recognition (Socher et al., 2011). In NLP applications, CNNs usually abstract over a series of word- or character-level embeddings, instead of pixels. In this part of our model, the encoder learns a more compact representation of the sentence, with reduced vector space dimensions and features. This permits the entire DNN to focus on the information most relevant to paraphrase identification.
The output of each CNN is passed through a max pooling layer to an LSTM RNN. Since the CNN and the max pooling layer perform discriminative reduction of the input's dimensions, we can run a relatively small LSTM RNN model (20 hidden units). In this phase, the vector dimensions of the sentence representation are further reduced, with relevant information conserved and highlighted, particularly for the sequential structure of the data. Each encoder is completed by two successive fully connected layers, of dimensions 15 and 10 respectively, the first one having a 0.5 dropout rate.
Figure 1: Example of an encoder. Input is passed to a CNN, a max pooling layer, an LSTM RNN, and finally two fully connected layers, the first having a dropout rate of 0.5. The input's and output's shape is indicated in brackets for each layer.
Each sentence is thus transformed to a 10 dimensional vector. To perform the final comparison, these two low dimensional vectors are passed to a layer that merges them into a single vector. We tried several ways of merging the encoders' outputs, and we found that simple vector concatenation was the best option. We produce a 20 dimensional two-sentence vector as the final output of the DNN.
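To make the pipeline concrete, the following is a minimal Keras sketch of one encoder branch and the concatenation-based merge, with the final sigmoid layer described below. The layer sizes (25 dilated filters of length 5, a 20-unit LSTM, 15- and 10-dim dense layers with 0.5 dropout) follow the text, while the sequence length, dilation rate, activations, and optimizer are illustrative assumptions rather than the exact configuration used in our experiments.

```python
from tensorflow.keras import layers, models

MAX_LEN, EMB_DIM = 30, 300   # assumed input: padded sequences of 300-dim word embeddings

def build_encoder():
    inp = layers.Input(shape=(MAX_LEN, EMB_DIM))
    # "Atrous" (dilated) convolution with 25 filters of length 5
    x = layers.Conv1D(filters=25, kernel_size=5, dilation_rate=2,
                      padding="same", activation="relu")(inp)
    x = layers.MaxPooling1D(pool_size=2)(x)
    x = layers.LSTM(20)(x)                       # small LSTM over the reduced sequence
    x = layers.Dense(15, activation="relu")(x)
    x = layers.Dropout(0.5)(x)
    x = layers.Dense(10, activation="relu")(x)   # 10-dim sentence vector
    return models.Model(inp, x)

# Two encoder branches, merged by simple concatenation into a 20-dim two-sentence vector,
# then fed to the final sigmoid layer used for the binary judgment.
enc_a, enc_b = build_encoder(), build_encoder()
in_a = layers.Input(shape=(MAX_LEN, EMB_DIM))
in_b = layers.Input(shape=(MAX_LEN, EMB_DIM))
merged = layers.Concatenate()([enc_a(in_a), enc_b(in_b)])
out = layers.Dense(1, activation="sigmoid")(merged)
model = models.Model([in_a, in_b], out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```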
We do not apply any special mechanism for "comparison" or "alignment" in this phase. To measure the similarity of two sequences our model makes use only of the information contained in the merged vector that the encoders produce. We did not use a device in the merging phase to assess similarity between the two sequences. This allows a high degree of freedom in the interpretation patterns we are trying to model, but it also involves a fair amount of noise, which increases the risk of error.
The merging layer feeds the concatenated input to a final fully connected layer. The last layer applies a sigmoid function to produce the judgments. The advantage of using a sigmoid function in this case is that, while it performs well for binary classification, it returns a gradient over its input, thus generating an ordering of values appropriate for the ranking task. The combination of these three kinds of Neural Networks in this order (CNN, LSTM RNN and fully connected layers) has been explored in other works, with interesting results (Sainath et al., 2015). This research has indicated that these architectures can complement each other in complex semantic tasks, such as sentiment analysis (Wang et al., 2016) and text representation (Vosoughi et al., 2016).
The fundamental idea here is that these three kinds of Neural Network capture information in different ways that can be combined to achieve a better global representation of sentence input. While a CNN can reduce the spectral variance of input, an LSTM RNN is designed to model its sequential temporal dimension. At the same time, an LSTM RNN's performance can be strongly improved by providing it with better features (Pascanu et al., 2014), such as the ones produced by a CNN, as happens in our case. The densely connected layers contribute a clearer, more separable final vector representation of one sentence.
To encode the original sentences we used Word2Vec embeddings pre-trained on the very large Google News dataset (Mikolov et al., 2013). We used these embeddings to create the input sequences for our model.
We take as a baseline for evaluating our model the cosine similarity of the sentence vectors, obtained through combining their respective pretrained lexical embeddings. This baseline gives very low accuracy and F1 scores.
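This baseline can be sketched as follows, assuming the same kind of pre-trained lexical embeddings; the embedding lookup used here is a hypothetical stand-in, and out-of-vocabulary handling is not shown.

```python
import numpy as np

def sentence_vector(tokens, embed):
    """Mean of the pre-trained word vectors of a sentence."""
    return np.mean([embed(t) for t in tokens], axis=0)

def cosine_baseline(sent_a, sent_b, embed):
    a, b = sentence_vector(sent_a, embed), sentence_vector(sent_b, embed)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

# Toy usage with a hypothetical embedding lookup (the real baseline uses Word2Vec vectors)
rng = np.random.default_rng(0)
vocab = {}
embed = lambda w: vocab.setdefault(w, rng.normal(size=300))
print(cosine_baseline("my lawyer is an angel".split(),
                      "my lawyer is a good person".split(), embed))
```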
Binary Classification Task
As discussed above, our corpus can be applied to model two sub-problems: binary classification and paraphrase ordering.
To use our corpus for a binary classification task we map each set of five sentences into a series of pairs, where the first element is the metaphor we want to interpret and the second element is one of its four literal candidates. Gradient labels are then replaced by binary ones. We consider all labels higher than 2 as positive judgments (Paraphrase) and all labels less than or equal to 2 as negative judgments (Non-Paraphrase), reflecting the ranking discussed in Section 2. We train our model with these labels for a binary metaphor paraphrase detection task.
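The mapping from graded sets to binary training pairs can be sketched as follows; the dictionary structure and field names are illustrative assumptions, while the threshold follows the scheme above.

```python
def to_binary_pairs(paraphrase_set):
    """paraphrase_set: {"metaphor": str, "candidates": [(sentence, label in 1-4), ...]}"""
    pairs = []
    for sentence, label in paraphrase_set["candidates"]:
        # labels above 2 become positive (paraphrase), 2 or below negative (non-paraphrase)
        pairs.append((paraphrase_set["metaphor"], sentence, 1 if label > 2 else 0))
    return pairs

example = {
    "metaphor": "The crowd was a river in the street",
    "candidates": [
        ("The crowd was large and impetuous in the street", 4),
        ("There were a lot of people in the street", 3),
        ("There were few people in the street", 2),
        ("We reached a river at the end of the street", 1),
    ],
}
print(to_binary_pairs(example))
```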
Keeping the order of the input fixed (we will discuss this issue below), we ran the training phase for 15 epochs.
We reached an average accuracy of 67% for 12 fold cross-validation.
Interestingly, when trained on the pre-defined training set only, our model reaches a higher accuracy of 75%.
We strongly suspect that this discrepancy in performance is due to the small training and test sets created by the partitions of the 12 fold cross validation process.
In general, this task is particularly hard, both because of the complexity of the semantic properties involved in accurate paraphrase (see 4.1), and the limited size of the training set. It seems to us that an average accuracy of 67% on a 12 fold partitioning of training and test sets is a reasonable result, given the size of our corpus.
We observe that our architecture learned to recognize different semantic phenomena related to metaphor interpretation with a promising level of accuracy, but such phenomena need to be represented in the training set.
In light of the fact that previous work in this field is concerned with single verb paraphrase ranking (Bollegala and Shutova, 2013), where the metaphorical element is explicitly identified and the candidates don't contain any syntactic-semantic expansion, our results are encouraging.3 Although a small corpus may cause instability in results, our DNN seems able to generalize with relative consistency on the following patterns:
• Sentiment. My life in California was a nightmare -My life in California was terrible. Our system seems able to discriminate the right sentiment polarity of a metaphor by picking the right paraphrase, even when some candidates contain sentiment words of opposite polarity, which are usually very similar in a distributional space.
• Non metaphorical word re-use. Our system seems able, in several cases, to discriminate the correct paraphrase for a metaphor, even when some candidates re-use the words of the metaphor to convey a (wrong) literal meaning. My life in California was a dream -I lived in California and had a dream
• Cases of multi-word metaphor. Although well represented in our corpus, multi-word metaphors are in some respects the most difficult to correctly paraphrase, since the interpretation has to be extended to a number of words. Nonetheless, our model was able to correctly handle these in a number of situations. You can plant the seeds of anger -You can act in a way that will engender rage. However, our model had trouble with several other cases.
It seems to have particular difficulty in discriminating sentiment intensity, with assignment of higher scores to paraphrases that value the sentiment intensity of the metaphor, which creates problems in several instances. Also, cases of metaphoric exaggeration (My roommate is a sport maniac -My roommate is a sport person), negation (My roommate was not an eagle -My roommate was dumb.) and syntactic inversions pose difficulties for our models.
We found that our model is able to abstract over specific patterns, but, predictably, it has difficulty in learning when the semantic focus of an interpretation consists in a phrase that is under-represented in the training data.
In some cases, the effect of data scarcity can be observed in an "overfit weighting" of specific terms. Some words that were seen in the data only once are associated with a high or low score independently of their context, degrading the overall performance of the model. We believe that these idiosyncrasies can be overcome through training on a larger data set.
The gray areas of interpretation
We observe that, on occasion, the model's errors fall into a gray area between clear paraphrase and clear non-paraphrase. Here the correctness of a label is not obvious.
These cases are particularly important in metaphor paraphrasing, since this task requires an interpretative leap from the metaphor to its literal equivalent. For example, the pair I was home watching the days slip by from my window -I was home thinking about the time I was wasting can be considered as a loose paraphrase pair. Alternatively, it can be regarded as a case of non-paraphrase, since the second element introduces some interpretative elements (I was thinking about the time) that are not in the original.
In our test set we labeled it as 3 (loose paraphrase), but if our system fails to label it correctly in a binary task, it is not entirely clear that it is making an error. For these cases, the approach presented in the next section is particularly useful.
Paraphrase Ordering Task
The high degree of correlation we found between the AMT annotations and our single annotator's judgments indicates that we can use this dataset for an ordering task as well. Since the human judgments we collected about the "degree of paraphrasehood" are quite consistent, it is reasonable to pursue a non-binary approach.
Once the DNN has learned representations for binary classification, we can apply it to rank the sentences of the test set in order of similarity.
We apply the sigmoid value distribution for the candidate sentences in a set of five (the reference and four candidates) to determine the ranking.
To do this we use the original structure of our dataset, composed of sets of five sentences. First, we assign a similarity score to all pairs of sentences (reference sentence and candidate paraphrase) in a set. This is the similarity score learned in the binary task, so it is determined by the sigmoid function applied to the output.
The following is an example of an ordered set with strong correlation between the model's predictions and our annotations, both given in parentheses for each candidate:
• Metaphor: The candidate is a fox
  - The candidate owns a fox (prediction 0.13, label 1)
  - The candidate is stupid (prediction 0.30, label 2)
  - The candidate is intelligent (prediction 0.41, label 3)
  - The candidate is a cunning person (prediction 0.64, label 4)
We compute the average Pearson and Spearman correlations on all sets of the test corpus, to check the extent to which the ranking that our DNN produces matches our mean crowd source human annotations.
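A minimal sketch of this evaluation is given below, assuming the per-set model scores and mean human labels are already available; scipy's correlation functions are used for illustration.

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr

def average_correlations(sets):
    """sets: list of (model_scores, human_labels), one entry per group of four candidates."""
    pearson = [pearsonr(scores, labels)[0] for scores, labels in sets]
    spearman = [spearmanr(scores, labels)[0] for scores, labels in sets]
    return float(np.mean(pearson)), float(np.mean(spearman))

# e.g. one set: model sigmoid scores against the gold labels 1-4
print(average_correlations([([0.13, 0.30, 0.41, 0.64], [1, 2, 3, 4])]))
```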
While Pearson correlation measures the relationship between two continuous variables, Spearman correlation evaluates the monotonic relation between two variables, continuous or ordinal.
Since the first of our variables, the model's judgment, is continuous, while the second one, the human labels, is ordinal, both measures are of interest.
We found comparable and meaningful correlations between mean AMT rankings and the ordering that our model predicts, on both metrics. On the balanced training and test set, we achieve an average Pearson correlation of 0.75 and an average Spearman correlation of 0.68. On a twelve fold cross-validation frame, we achieve an average Pearson correlation of 0.55 and an average Spearman correlation of 0.54. We chose a twelve fold cross-validation because it is the smallest partition we can use to get meaningful results. We conjecture that the average cross fold validation performance is lower because of the small size of the training data in each fold. These results are displayed in Table 2. 4 These correlations indicate that our model achieves an encouraging level of accuracy in predicting our gradient annotations for the candidate sentences in a set when trained for a binary classification task.
This task differs from the binary classification task in several important respects. In one way, it is easier. A non-paraphrase can be misjudged as a paraphrase and still appear in the right order within a ranking. In another sense, it is more difficult. Strict paraphrases, loose paraphrases, and various kinds of semantically similar nonparaphrases have to be ordered in accord with human judgment patterns, which is a more complex task than simple binary classification.
We should consider to what extent this task is different from a multi-class categorization problem. Broadly, multi-class categorization requires a system for linking a pair of sentences to a specific class of similarity. This is dependent upon the classes defined by the annotator and presented in the training phase. In several cases determining these ranked categories might be problematic. A class corresponding to our label "3", for example, could contain many different phenomena related to metaphor paraphrase: expansions, reformulations, reduction in the expressivity of the sentence, or particular interpretations of the metaphor's meaning. Our way of formulating the ordering task allows us to overcome this problem. A paraphrase containing an expansion and a paraphrase involving some information loss, both labeled as "3", might have quite different scoring, but they still fall between all "2" elements and all "4" elements in a ranking.
We can see that our gradient ranking system provides a more nuanced view of the paraphrase relation than a binary classification.
Consider the following example:
• Metaphor: My life in California was a dream
  - I had a dream once (prediction 0.03, label 1)
  - While living in California I had a dream (prediction 0.05, label 2)
  - My life in California was nice, I enjoyed it (prediction 0.11, label 3)
  - My life in California was absolutely great (prediction 0.58, label 4)
The human annotators consider the pair My life in California was a dream -My life in California was nice, I enjoyed it as loose paraphrases, while the model scored it very low. But the difference in sentiment intensity between the metaphor and the literal candidate renders the semantic relation between the two sentences less than perspicuous. Such intensity is instead present in My life in California was absolutely great, marked as a more valid paraphrase (score 4). On the other hand, it is clear that in the choice between While living in California I had a dream and My life in California was nice, I enjoyed it, the latter is a more reasonable interpretation of the metaphor.
The annotators' relative mean ranking is preserved by our model, even if its absolute scoring involves an error in binary classification.
The correlation between AMT annotation ordering and our model's predictions is a by-product of supervised binary learning. Since we are reusing the predictions of a binary classification task, we consider it a form of transfer learning from a supervised binary context to an unsupervised ordering task. In this case, our corpus allows us to perform double transfer learning. First, we used pretrained word embeddings trained to maximize single words' contextual similarity, in order to train on a supervised binary paraphrase dataset. Then, we use the representations acquired in this way to perform an ordering task for which the DNN had not been trained.
The fact that ranked correlations are sustained through binary paraphrase classification is not an obvious result. In principle, a model trained on {0,1} labels could "polarize" its scores to the point where no meaningful ordering would be available. Had this happened, a good performance in a binary task would actually conceal the loss of important semantic information. The fact that there is no necessary connection between binary classification and prediction of gradient labels, and that an increase in one can even produce a loss in the other, is pointed out by Xu et al. (2015), who discuss the relation of paraphrase identification to the recognition of semantic similarity.
The Nature of the Metaphor Interpretation Task
Although this task resembles a particular case of paraphrase detection, in many respects it is something different. While paraphrase detection concerns learning content identity or strong cases of semantic similarity, our task involves the interpretation of figurative language. In a traditional paraphrase task, we should maintain that "The candidate is a fox" and "The candidate is cunning" are invalid paraphrases. First, the superficial informational content of the two sentences is different. Second, without further context we might assume that the candidate is an actual fox. We ignore the context of the phrase.
In this task the frame is different. We assume that the first sentence contains a metaphor. We summarize this task by the following question.
Given that X is a metaphor, which one of the given candidates would be its best literal interpretation?
We trained our model to move along a similar learning pattern. This training frame can produce the apparent, but false paradox that two acceptable paraphrases such as The Council is on fire and The Council is burning are assigned a low score by our model. If the first element is a metaphor, the second element is, in fact, a bad literal interpretation. A higher score is correctly assigned to the candidate People in the Council are very excited.
Conclusions
We present a new kind of corpus to evaluate metaphor paraphrase detection, following the approach presented in Bizzoni and Lappin (2017) for paraphrase grading, and we construct a novel type of DNN architecture for a set of metaphor interpretation tasks. We show that our model learns an effective representation of sentences, starting from the distributional representations of their words. Using word embeddings trained on very large corpora proved to be a fruitful strategy. Our model is able to retrieve from the original semantic spaces not only the primary meaning or denotation of words, but also some of the more subtle semantic aspects involved in the metaphorical use of terms.
We based our corpus' design on the view that paraphrase ranking is a useful way to approach the metaphor interpretation problem.
We show how this kind of corpus can be used for both supervised learning of binary classification, and for gradient judgment prediction.
The neural network architecture that we propose encodes each sentence in a 10 dimensional vector representation, combining a CNN, an LSTM RNN, and two densely connected neural layers. The two input representations are merged through concatenation and fed to a series of densely connected layers.
We show that such an architecture is able, to an extent, to learn metaphor-to-literal paraphrase.
While only binary classification is learned in the training phase, the model yields a robust correlation in the ordering task through the sigmoid score distributions generated for binary classification. The model learns to classify a sentence as a valid or invalid literal interpretation of a given metaphor, and it retains enough information to assign a gradient value to sets of sentences in a way that correlates with our crowd sourced annotation.
Our model doesn't use any "alignment" of the data. The encoders' representations are simply concatenated. This gives our DNN considerable flexibility in modeling interpretation patterns. It can also create complications where a simple alignment of two sentences might suffice to identify a similarity. We have considered several possible alternative versions of this model to tackle this issue.
In future we will expand the size and variety of our corpus. We will perform a detailed error analysis of our model's predictions, and we will further explore different kinds of neural network designs for paraphrase detection and ordering. Finally, we intend to study this task "the other way around" by detecting the most appropriate metaphor to paraphrase a literal reference sentence or phrase.
Table 1: Accuracy for different versions of the model, and the baseline. Each version ran on our standard train and test data, without performing cross-validation. We use as a baseline the cosine similarity between the mean of the word vectors composing each sentence.
Table 2: Accuracy and ranking correlation for twelve fold cross-validation. It can be seen that the simple cosine similarity between the mean vectors of the two sentences, which we use as baseline, returns a low correlation with human judgments.
1 Our annotated data set and the code for our model are available at https://github.com/yuri-bizzoni/Metaphor-Paraphrase.
2 Some of the problems raised by the concept of paraphrase in figurative language are discussed in Section 6.
3 It should be noted that Bollegala and Shutova (2013) employ an unsupervised approach.
4 As discussed above, the upper bound for our model's performance can be set at 0.9, the correlation between our single annotator's and the mean crowd sourced judgments.
Acknowledgments
We are grateful to our colleagues in the Centre for Linguis-
References
Rodrigo Agerri. 2008. Metaphor in textual entailment. In COLING 2008, 22nd International Conference on Computational Linguistics, Posters Proceedings, Manchester, UK, pages 3-6. http://www.aclweb.org/anthology/C08-2001.
Eneko Agirre, Carmen Banea, Daniel M. Cer, Mona T. Diab, Aitor Gonzalez-Agirre, Rada Mihalcea, German Rigau, and Janyce Wiebe. 2016. SemEval-2016 task 1: Semantic textual similarity, monolingual and cross-lingual evaluation. In Proceedings of the 10th International Workshop on Semantic Evaluation, SemEval@NAACL-HLT 2016, San Diego, CA, USA, pages 497-511.
Yuri Bizzoni and Shalom Lappin. 2017. Deep learning of binary and gradient judgments for semantic paraphrase. In Proceedings of IWCS 2017.
Danushka Bollegala and Ekaterina Shutova. 2013. Metaphor interpretation using paraphrases extracted from the web. PloS ONE 8(9):e74304.
Liang-Chieh Chen, George Papandreou, Iasonas Kokkinos, Kevin Murphy, and Alan L. Yuille. 2016. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. CoRR abs/1606.00915. http://arxiv.org/abs/1606.00915.
Jonathan Dunn, Jon Beitran de Heredia, Maura Burke, Lisa Gandy, Sergey Kanareykin, Oren Kapah, Matthew Taylor, Dell Hines, Ophir Frieder, David Grossman, et al. 2014. Language-independent ensemble approaches to metaphor identification. In 28th AAAI Conference on Artificial Intelligence, AAAI 2014. AI Access Foundation.
Zornitsa Kozareva. 2015. Multilingual affect polarity and valence prediction in metaphors. In Proceedings of the 6th Workshop on Computational Approaches to Subjectivity, Sentiment and Social Media Analysis, WASSA@EMNLP 2015, Lisbon, Portugal, page 1.
Tina Krennmayr. 2015. What corpus linguistics can tell us about metaphor use in newspaper texts. Journalism Studies 16(4):530-546.
Siwei Lai, Liheng Xu, Kang Liu, and Jun Zhao. 2015. Recurrent convolutional neural networks for text classification. In Proceedings of the Twenty-Ninth AAAI Conference on Artificial Intelligence, AAAI'15, pages 2267-2273.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26, pages 3111-3119.
Saif Mohammad, Ekaterina Shutova, and Peter D. Turney. 2016. Metaphor as a medium for emotion: An empirical study. In Proceedings of the Fifth Joint Conference on Lexical and Computational Semantics, *SEM@ACL 2016, Berlin, Germany.
Razvan Pascanu, Caglar Gulcehre, Kyunghyun Cho, and Yoshua Bengio. 2014. How to construct deep recurrent neural networks. In Proceedings of the Second International Conference on Learning Representations (ICLR 2014).
Tara N. Sainath, Oriol Vinyals, Andrew W. Senior, and Hasim Sak. 2015. Convolutional, long short-term memory, fully connected deep neural networks. In 2015 IEEE International Conference on Acoustics, Speech and Signal Processing, ICASSP 2015, South Brisbane, Queensland, Australia, pages 4580-4584.
Ekaterina Shutova. 2010. Automatic metaphor interpretation as a paraphrasing task. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, HLT '10, pages 1029-1037.
Ekaterina Shutova and Simone Teufel. 2010. Metaphor corpus annotated for source-target domain mappings. In LREC, volume 2, pages 2-2.
Richard Socher, Eric H. Huang, Jeffrey Pennington, Andrew Y. Ng, and Christopher D. Manning. 2011. Dynamic pooling and unfolding recursive autoencoders for paraphrase detection. In Advances in Neural Information Processing Systems 24.
Peter D. Turney. 2013. Distributional semantics beyond words: Supervised learning of analogy and paraphrase. CoRR abs/1310.5042. http://arxiv.org/abs/1310.5042.
Tony Veale, Ekaterina Shutova, and Beata Beigman Klebanov. 2016. Metaphor: A Computational Perspective. Synthesis Lectures on Human Language Technologies. Morgan & Claypool Publishers.
Soroush Vosoughi, Prashanth Vijayaraghavan, and Deb Roy. 2016. Tweet2Vec: Learning tweet embeddings using character-level CNN-LSTM encoder-decoder. In Proceedings of the 39th International ACM SIGIR Conference on Research and Development in Information Retrieval, SIGIR '16, pages 1041-1044.
Jin Wang, Liang-Chih Yu, K. Robert Lai, and Xuejie Zhang. 2016. Dimensional sentiment analysis using a regional CNN-LSTM model. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics, ACL 2016, Berlin, Germany, Volume 2: Short Papers.
Wei Xu, Chris Callison-Burch, and Bill Dolan. 2015. SemEval-2015 task 1: Paraphrase and semantic similarity in Twitter (PIT). In Proceedings of the 9th International Workshop on Semantic Evaluation (SemEval 2015), Denver, Colorado, pages 1-11.
TMEKU System for the WAT2021 Multimodal Translation Task
Yuting Zhao (Tokyo Metropolitan University) zhao-yuting@ed.tmu.ac.jp
Mamoru Komachi (Tokyo Metropolitan University) komachi@tmu.ac.jp
Tomoyuki Kajiwara (Ehime University) kajiwara@cs.ehime-u.ac.jp
Chenhui Chu (Kyoto University) chu@i.kyoto-u.ac.jp
Proceedings of the 8th Workshop on Asian Translation, Bangkok, Thailand, August 5-6, 2021.
We introduce our TMEKU system submitted to the English→Japanese Multimodal Translation Task for WAT 2021. We participated in the Flickr30kEnt-JP task and the Ambiguous MSCOCO Multimodal task under the constrained condition, using only the officially provided datasets. Our proposed system employs soft word-region alignment for multimodal neural machine translation (MNMT). The experimental results evaluated on the BLEU metric provided by the WAT 2021 evaluation site (https://lotus.kuee.kyoto-u.ac.jp/WAT/WAT2021/) show that the TMEKU system has achieved the best performance among all the participating systems. Further analysis of the case study demonstrates that leveraging word-region alignment between the textual and visual modalities is the key to performance enhancement in our TMEKU system, which leads to better use of visual information.
Introduction
Neural machine translation (NMT) (Sutskever et al., 2014;Bahdanau et al., 2015) has achieved state-of-the-art translation performance. However, there remain numerous situations where textual context alone is insufficient for correct translation, such as in the presence of ambiguous words and grammatical gender. Therefore, researchers in this field have established multimodal neural machine translation (MNMT) tasks (Specia et al., 2016;Elliott et al., 2017;Barrault et al., 2018), which translates sentences paired with images into a target language.
Due to the lack of multimodal datasets, multimodal tasks on the English→Japanese (En→Ja) language pair had not received much attention. Since 2020, when a multimodal dataset for the En→Ja language pair was made publicly available, multimodal machine translation (MMT) tasks on En→Ja were held at WAT 2020 (Nakazawa et al., 2020) for the first time. Some studies have started to focus on incorporating multimodal contents, particularly images, to improve translation performance on the En→Ja task.
In this study, we apply our system (Zhao et al., 2021) to the MMT task on the En→Ja language pair; we call it the TMEKU system, an abbreviation combining the Tokyo Metropolitan University, the Ehime University, and the Kyoto University. This system is designed to translate a source word into a target word while focusing on a relevant image region. To guide the model to translate certain words based on certain image regions, explicit alignment between source words and image regions is needed. We propose to generate a soft word-region alignment based on the cosine similarity between source words and visual concepts. During encoding, the textual and visual modalities are represented interactively by leveraging the word-region alignment, which associates image regions with their respective source words.
The contributions of this study are as follows:
1. Our TMEKU system outperforms the baselines and achieves first place, as evaluated by the BLEU metric, among all the submitted systems in the multimodal translation task of WAT 2021 (Nakazawa et al., 2021) on the En→Ja language pair.
2. Further analysis demonstrates that our TMEKU system utilizes visual information effectively by relating the textual to visual information.
Figure 1: The soft alignment of word-region.
TMEKU System
Word-Region Alignment
As shown in Figure 1, we propose to create an alignment between semantically relevant source words and image regions. For the regions, we follow Anderson et al. (2018) in detecting object-level image regions from each image, which are denoted by bounding boxes in the figure. In particular, each bounding box is detected along with a visual concept consisting of an attribute class followed by an object class, instead of only the object class. We take these visual concepts to represent the image regions. Each image is labeled with 36 visual concepts of image regions, which are space-separated phrases. For the words, we lowercase and tokenize the source English sentences with the Moses toolkit.3 The soft alignment is a similarity matrix filled with the cosine similarity between source words and visual concepts. To avoid unknown words, we convert the words and concepts into subword units using the byte pair encoding (BPE) model (Sennrich et al., 2016). Subsequently, we utilize fastText (Bojanowski et al., 2017) to learn subword embeddings. We use a pre-trained model4 containing two million word vectors trained with subword information on Common Crawl (600B tokens). The source subword embeddings can be generated directly, whereas the visual concept embeddings are obtained by averaging the embeddings of all constituent subwords, because the concepts are phrases. As shown in Figure 1, the source subwords are represented by $W = \{w_1, w_2, w_3, \cdots, w_n\}$, and the visual concepts are represented by $C = \{c_1, c_2, c_3, \cdots, c_{36}\}$. These embeddings provide a mapping function from a subword to a 300-dim vector, where semantically similar subwords are embedded close to each other. Finally, we calculate a word-region cosine similarity matrix as the soft alignment $A_{soft}$.
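The following is a minimal sketch of this soft alignment computation. The embed function stands in for a fastText-style subword-aware lookup returning 300-dim vectors and is an assumption; concept phrases are averaged over their tokens as described above.

```python
import numpy as np

def unit(v):
    v = np.asarray(v, dtype=float)
    return v / (np.linalg.norm(v) + 1e-8)

def soft_alignment(source_subwords, visual_concepts, embed):
    # One L2-normalised vector per source subword
    W = np.stack([unit(embed(w)) for w in source_subwords])              # (n, 300)
    # One vector per region: average over the tokens of its concept phrase
    C = np.stack([unit(np.mean([embed(t) for t in c.split()], axis=0))
                  for c in visual_concepts])                             # (36, 300)
    return W @ C.T   # (n, 36) matrix of cosine similarities: the soft alignment A_soft

# Toy usage with a hypothetical embedding lookup (the real system uses fastText vectors)
rng = np.random.default_rng(0)
toy_vocab = {}
embed = lambda w: toy_vocab.setdefault(w, rng.normal(size=300))
A_soft = soft_alignment(["a", "man", "in", "a", "red", "shirt"],
                        ["red shirt", "standing man"], embed)
print(A_soft.shape)  # (6, 2)
```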
Encoder
Representing Textual Input
In Figure 2, the textual encoder is a bi-directional RNN. Given a source sentence of $n$ source words, the encoder generates the forward annotation vectors $(\overrightarrow{h}_1, \overrightarrow{h}_2, \overrightarrow{h}_3, \cdots, \overrightarrow{h}_n)$ and the backward annotation vectors $(\overleftarrow{h}_1, \overleftarrow{h}_2, \overleftarrow{h}_3, \cdots, \overleftarrow{h}_n)$. By concatenating the forward and backward vectors, $h_i = [\overrightarrow{h}_i; \overleftarrow{h}_i]$, all words are denoted as $H = (h_1, h_2, \cdots, h_n)$.
Representing Visual Input
We follow Anderson et al. (2018) in extracting the region-of-interest (RoI) features of the detected image regions in each image. There are 36 object-level image region features, each of which is represented as a 2,048-dim vector $r$, and all features in an image are denoted as $R = (r_1, r_2, r_3, \cdots, r_{36})$.
Representations with Word-Region Alignment
As shown in Figure 2, we represent the textual annotation of the $n$ source words as $A^{txt} = (a^{txt}_1, a^{txt}_2, a^{txt}_3, \cdots, a^{txt}_n)$, and the visual annotation of the 36 regions as $A^{img} = (a^{img}_1, a^{img}_2, a^{img}_3, \cdots, a^{img}_{36})$. We build the visual annotation $A^{img}$ by concatenating $R$ with the aligned textual features $H^{align}$, while the textual annotation $A^{txt}$ uses the textual input representation $H$ directly.
The visual annotation $A^{img}$ is computed as follows:
$$A^{img} = \mathrm{CONCAT}(R, H^{align})$$
$$H^{align} = \frac{A_{soft}^{T} \cdot H}{|H|}$$
where $|H|$ and $|R|$ denote the number of source words ($n$) and the number of image regions (36), respectively, and CONCAT is a concatenation operator.
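A small numpy sketch of these two equations is given below; the shapes follow the text, and the linear projection of the aligned features to 2,048 dimensions mentioned in the Parameters section is omitted for brevity.

```python
import numpy as np

def visual_annotation(H, R, A_soft):
    """H: (n, d_h) textual states; R: (36, d_r) region features; A_soft: (n, 36)."""
    H_align = (A_soft.T @ H) / H.shape[0]         # (36, d_h): aligned textual features
    return np.concatenate([R, H_align], axis=1)   # (36, d_r + d_h): visual annotation A_img

n, d_h, d_r = 6, 800, 2048
H = np.random.rand(n, d_h)
R = np.random.rand(36, d_r)
A_soft = np.random.rand(n, 36)
print(visual_annotation(H, R, A_soft).shape)      # (36, 2848)
```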
Decoder
To generate the target word $y_t$ at time step $t$, a hidden state proposal $s_t^{(1)}$ is computed in the first cell of deepGRU (Delbrouck and Dupont, 2018) (GRU$^{(1)}$) by the function $f_{gru_1}(y_{t-1}, s_{t-1})$. The function considers the previously emitted target word $y_{t-1}$ and the previously generated hidden state $s_{t-1}$ as follows:
$$s_t^{(1)} = (1 - \xi_t) \odot \tilde{s}_t + \xi_t \odot s_{t-1}$$
$$\tilde{s}_t = \tanh(W E_Y[y_{t-1}] + \gamma_t \odot (U s_{t-1}))$$
$$\gamma_t = \sigma(W_{\gamma} E_Y[y_{t-1}] + U_{\gamma} s_{t-1})$$
$$\xi_t = \sigma(W_{\xi} E_Y[y_{t-1}] + U_{\xi} s_{t-1})$$
where $W_{\xi}$, $U_{\xi}$, $W_{\gamma}$, $U_{\gamma}$, $W$, and $U$ are training parameters, and $E_Y$ is the target word embedding.
Text-Attention
At time step $t$, the text-attention focuses on every textual annotation $a^{txt}_i$ in $A^{txt}$ and assigns it an attention weight. The textual context vector $z_t$ is generated as follows:
$$e^{text}_{t,i} = (V^{text})^{T} \tanh(U^{text} s^{(1)}_t + W^{text} a^{txt}_i)$$
$$\alpha^{text}_{t,i} = \mathrm{softmax}(e^{text}_{t,i})$$
$$z_t = \sum_{i=1}^{n} \alpha^{text}_{t,i} a^{txt}_i$$
where $V^{text}$, $U^{text}$, and $W^{text}$ are training parameters; $e^{text}_{t,i}$ is the attention energy; and $\alpha^{text}_{t,i}$ is the attention weight matrix.
Image-Attention
Similarly, the visual context vector $c_t$ is generated as follows:
$$e^{img}_{t,j} = (V^{img})^{T} \tanh(U^{img} s^{(1)}_t + W^{img} a^{img}_j)$$
$$\alpha^{img}_{t,j} = \mathrm{softmax}(e^{img}_{t,j})$$
$$c_t = \sum_{j=1}^{36} \alpha^{img}_{t,j} a^{img}_j$$
where $V^{img}$, $U^{img}$, and $W^{img}$ are training parameters; $\alpha^{img}_{t,j}$ is the weight matrix of each $a^{img}_j$; and $e^{img}_{t,j}$ is the attention energy.
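Both attentions share the same functional form, which can be sketched generically as follows; the weight matrices are random placeholders and the dimensions are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(s_t, A, U, W, V):
    """s_t: decoder state (d_s,); A: annotations (k, d_a); returns a context vector (d_a,)."""
    energies = np.array([V @ np.tanh(U @ s_t + W @ a) for a in A])   # attention energies
    alpha = softmax(energies)                                        # attention weights
    return alpha @ A                                                 # weighted sum of annotations

d_s, d_a, d_att, k = 400, 800, 128, 10
s_t = np.random.rand(d_s)
A = np.random.rand(k, d_a)
U, W, V = np.random.rand(d_att, d_s), np.random.rand(d_att, d_a), np.random.rand(d_att)
print(attend(s_t, A, U, W, V).shape)   # (800,)
```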
DeepGRU
As shown in Figure 2, deepGRU consists of three layers of GRU cells, which are variants of the conditional gated recurrent unit (cGRU).5 The hidden state $s_t$ is computed in GRU$^{(3)}$ as follows; because the calculations of $f_{gru_2}$ and $f_{gru_3}$ are similar to the function $f_{gru_1}$, they are not included in the paper.
$$s_t = f_{gru_3}([c_t, y_{t-1}], s^{(2)}_t)$$
$$s^{(2)}_t = f_{gru_2}(z_t, s^{(1)}_t)$$
We use a gated hyperbolic tangent activation instead of tanh. This nonlinear layer implements the function $f_{ght}: x \in \mathbb{R}^{m} \rightarrow y \in \mathbb{R}^{n}$ with parameters defined as follows:
$$\tilde{y} = \tanh(Kx + b)$$
$$g = \sigma(K'x + b')$$
$$y = \tilde{y} \odot g$$
where $K, K' \in \mathbb{R}^{n \times m}$ and $b, b' \in \mathbb{R}^{n}$ are training parameters.
To ensure that both representations have their own projections to compute the candidate probabilities, a textual GRU block and a visual GRU block (Delbrouck and Dupont, 2018) are obtained as below:
$$b^{v}_t = f_{ght}(W^{v}_{b} s_t)$$
$$b^{t}_t = f_{ght}(W^{t}_{b} s^{(2)}_t)$$
$$y_t \sim p_t = \mathrm{softmax}(W^{t}_{proj} b^{t}_t + W^{v}_{proj} b^{v}_t)$$
where $W^{v}_{b}$, $W^{t}_{b}$, $W^{t}_{proj}$, and $W^{v}_{proj}$ are training parameters.
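A minimal sketch of the gated tanh blocks and the output projection is given below; the parameter shapes and the vocabulary size are illustrative assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def gated_tanh(x, K, b, K2, b2):
    y_tilde = np.tanh(K @ x + b)
    gate = 1.0 / (1.0 + np.exp(-(K2 @ x + b2)))   # sigmoid gate
    return y_tilde * gate                         # element-wise gating

def output_distribution(s_t, s2_t, params):
    b_v = gated_tanh(s_t, *params["visual"])      # visual block from s_t
    b_t = gated_tanh(s2_t, *params["textual"])    # textual block from s_t^(2)
    logits = params["W_t_proj"] @ b_t + params["W_v_proj"] @ b_v
    return softmax(logits)                        # p_t over the target vocabulary

d, m, vocab = 400, 300, 1000
mk = lambda *shape: np.random.rand(*shape)
params = {"visual":  (mk(m, d), mk(m), mk(m, d), mk(m)),
          "textual": (mk(m, d), mk(m), mk(m, d), mk(m)),
          "W_t_proj": mk(vocab, m), "W_v_proj": mk(vocab, m)}
print(output_distribution(mk(d), mk(d), params).sum())  # ~1.0
```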
Experiments
Dataset
Firstly, we conducted experiments for the En→Ja task using the official Flickr30kEnt-JP dataset (Nakayama et al., 2020), which was extended from the Flickr30k (Young et al., 2014) and Flickr30k Entities (Plummer et al., 2017) datasets, where manual Japanese translations were newly added.
For training and validation, we used the Flickr30kEnt-JP dataset 6 for Japanese sentences, the Flickr30k Entities dataset 7 for English sentences, and the Flickr30k dataset 8 for images. They were sharing the same splits of training and validation data made in Flickr30k Entities. For test data, we used the officially provided data of the Flickr30kEnt-JP task, and their corresponding images were in the Flickr30k dataset.
Note that the Japanese training data size is originally 148,915 sentences, but five sentences are missing. Thus, we used 148,910 sentences for training. In summary, we used 148,910 pairs for training, 5k pairs for validation, and 1k monolingual English sentences for translating test results.
Secondly, we also conducted experiments for the En→Ja task using the official Ambiguous MSCOCO dataset (Merritt et al., 2020),9 which was extended from the Ambiguous COCO captions and images,10 where the Japanese translations were newly added. It includes a validation set with 230 pairs and a test set with 231 pairs. For standard training data, the training data from the Flickr30kEnt-JP dataset was officially designated.
6 https://github.com/nlab-mpg/Flickr30kEnt-JP
7 http://bryanplummer.com/Flickr30kEntities/
8 http://shannon.cs.illinois.edu/DenotationGraph/
9 https://github.com/knccch/JaEnCOCO
10 http://www.statmt.org/wmt17/multimodal-task.html
Preprocessing
For English sentences, we applied lowercasing, punctuation normalization, and tokenization with the Moses toolkit. We then converted the space-separated tokens into subword units using the BPE model with 10k merge operations. For Japanese sentences, we used MeCab11 for word segmentation with the IPA dictionary. The resulting vocabulary sizes for En→Ja were 9,578→22,274 tokens.
For image regions, we used Faster-RCNN (Ren et al., 2015), as in Anderson et al. (2018), to detect up to 36 salient visual objects per image and extracted their corresponding 2,048-dim image region features and attribute-object combined concepts.
Settings
(i) NMT: the baseline NMT system (Bahdanau et al., 2015) is the architecture comprised a 2-layer bidirectional GRU encoder and a 2-layer cGRU decoder with attention mechanism, which only encodes the source sentence as the input. (ii) MNMT: the baseline MNMT system without word-region alignment (Zhao et al., 2020). This architecture comprised a 2-layer bidirectional GRU encoder and a 2-layer cGRU decoder with double attentions to integrate visual and textual features. (iii) TMEKU system: our proposed MNMT system with word-region alignment.
We conducted all experiments with the Nmtpy toolkit (Caglayan et al., 2017).
Parameters
We ensured that the parameters were consistent in all the settings. We set the encoder and decoder hidden states to 400-dim; word embeddings to 200-dim; batch size to 32; beam size to 12; text dropout to 0.3; image region dropout to 0.5; dropout of the source RNN hidden states to 0.5; and dropout of the blocks $b^{t}_t$ and $b^{v}_t$ to 0.5. Specifically, the textual annotation $A^{txt}$ was 800-dim, consistent with $H$. Further, the visual annotation $A^{img}$ was 4,096-dim, a concatenation of $R$ and $H^{align}$, where $R$ was 2,048-dim and $H^{align}$ was 2,048-dim after a linear transformation from 800-dim. We trained the model using stochastic gradient descent with ADAM (Kingma and Ba, 2015) and a learning rate of 0.0004. We stopped training when the BLEU (Papineni et al., 2002) score did not improve for 20 evaluations on the validation set; one validation evaluation was performed after every epoch.
For the Flickr30kEnt-JP task on the En→Ja, each experiment is repeated with 12 different seeds to mitigate the variance of BLEU. At last, we choose the top 10 trained models that evaluated by BLEU scores on the validation set for ensembling. For the Ambiguous MSCOCO task on the En→Ja, each experiment is repeated with 8 different seeds to mitigate the variance of BLEU and benefit from ensembling these 8 trained models for the final testing.
Evaluation
We evaluated the quality of the translation results using the official evaluation system provided by WAT 2021. We submitted the final translation results in Japanese, which was translated from the official test data in English. On the WAT 2021 evaluation site, an automatic evaluation server was prepared and the BLEU was the main metric to evaluate our submitted translation results.
Results
In Table 1, we presented the results of the baselines and our TMEKU system on the Flickr30kEnt-JP task. We compared all the results based on BLEU scores evaluated by WAT 2021 evaluation site. For instance, the TMEKU system outperformed the NMT baseline by BLEU scores of 0.86 and outperformed the MNMT baseline by BLEU scores of 0.69 on the official test set. Our TMEKU system achieved significant improvement over both the NMT and MNMT baselines. Moreover, the result of ensembling the top 10 models has achieved the first place in the ranking of this task.
We also participated in the Ambiguous MSCOCO task on the En→Ja translation using our TMEKU system. Our reported BLEU scores are shown in Table 2, and the result of ensembling 8 models has ranked the first among all the submissions in this task.
Human Evaluation
To further validate the translation performance, a human evaluation was done by the organizers.
Two native speakers of Japanese rated the translation results with a score of 1 to 5 (1 is the worst and 5 is the best); they were instructed to focus more on semantic meaning than on grammatical correctness. For the En→Ja language pair, 200 randomly selected examples were evaluated for the Flickr30kEnt-JP task and the Ambiguous MSCOCO task, respectively.
The human evaluation scores provided by the organizers are added in Table 1 and Table 2, which have achieved the best scores among the participated systems in their respective tasks.
Case Study
We show two cases in Figure 3; the improvements are highlighted in green.
We perform two types of visualization for each case: (1) We visualize the source-target word alignment of the text-attention.
(2) We visualize the region-target alignment of the image-attention at a time step that generates a certain target word along with attending to the most heavily weighted image region feature.
In the case shown on the left, our TMEKU system translates "entering" to "entrant," but the baselines under-translate. By visualization, the text-attention and image-attention assign the highest weights to the word and region that are semantically relevant at the time step of generating "entrant." This example shows that the translation quality improvement is due to the simultaneous attention to semantically related image regions and words.
English: a man in a red shirt entering an establishment. Reference: un homme en t-shirt rouge entrant dans un établissement. NMT Baseline: un homme en chemise rouge dans un établissement. MNMT Baseline: un homme en chemise rouge dans un établissement. TMEKU System: un homme en t-shirt rouge entrant (entering) dans un établissement.
In the case shown on the right, our TMEKU system correctly translates "backyard" to the compound noun "arrière-cour," but the baselines mistranslate it to "cour," which means "yard" in English. Through visualization, we find that the text-attention and image-attention focus on the features that are semantically relevant at that time step. This example shows that an image region feature associated with its semantically relevant textual feature can overcome the deficiency where the object attribute cannot be specifically represented by the image region feature alone.
English: a man is grilling out in his backyard. Reference: un homme fait un barbecue dans son arrière-cour. NMT Baseline: un homme fait griller quelque chose dans sa cour (yard). MNMT Baseline: un homme fait griller quelque chose dans sa cour (yard). TMEKU System: un homme fait griller quelque chose dans sa arrière-cour (backyard).
Conclusion
We presented our TMEKU system for the English→Japanese MMT tasks of WAT 2021, which is designed to simultaneously consider relevant textual and visual features during translation. By integrating the explicit word-region alignment, the object-level regional features can be further specified with their respective source textual features. This leads the two attention mechanisms to capture the semantic relationships between textual objects and visual concepts.
Experimental results show that our TMEKU system exceeded the baselines by a large margin and achieved the best performance among all participating systems. We also performed a case study analysis to demonstrate the specific improvements resulting from the related modalities.
In the future, we plan to propose a more efficient integration method to make the modalities interact with each other.
Figure 2: The TMEKU system.
Figure 3: Examples for case study. The improved translation is highlighted in green.
Table 2: Ambiguous MSCOCO task: BLEU scores and human evaluation score (full score is 5) on the En→Ja.
https://github.com/moses-smt/mosesdecoder
https://fasttext.cc/docs/en/english-vectors.html
https://github.com/nyu-dl/dl4mt-tutorial/blob/master/docs/cgru.pdf
...t and b^v_t to 0.5. Specifically, the textual annotation A_txt was 800-dim, consistent with H. Further, the visual annotation A_img was 4,096-dim, obtained by concatenating R and H_align, where R was 2,048-dim and H_align was 2,048-dim, produced by a linear transformation from 800-dim. We trained the model using stochastic gradient descent with ADAM (Kingma and Ba, 2015) and a learning rate of 0.0004. We stopped training when the BLEU (Papineni et al., 2002) score did not improve for 20 evaluations on the validation set.
https://taku910.github.io/mecab/
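A schematic of the early-stopping rule described above (stop once validation BLEU has not improved for 20 evaluations); the evaluation function here is a hypothetical stand-in, not part of the actual training code.

```python
# Schematic early-stopping loop: stop when validation BLEU has not improved
# for `patience` consecutive evaluations. evaluate_bleu() stands in for the
# real train/validate cycle.
def train_with_early_stopping(evaluate_bleu, max_evals=1000, patience=20):
    best_bleu = float("-inf")
    evals_without_improvement = 0
    for step in range(max_evals):
        bleu = evaluate_bleu(step)          # one train/evaluate cycle
        if bleu > best_bleu:
            best_bleu = bleu                # new best score: reset counter
            evals_without_improvement = 0
        else:
            evals_without_improvement += 1
        if evals_without_improvement >= patience:
            break                           # no improvement for `patience` evals
    return best_bleu

if __name__ == "__main__":
    # toy BLEU curve that rises and then plateaus at step 20
    print(train_with_early_stopping(lambda s: min(40.0, 30.0 + 0.5 * s)))
```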
Acknowledgments
This work was supported by Grant-in-Aid for Young Scientists #19K20343, JSPS. We are immensely grateful to Mr. Tosho Hirasawa, who provided coding support and comments that significantly helped the implementation of our system.
Peter Anderson, Xiaodong He, Chris Buehler, Damien Teney, Mark Johnson, Stephen Gould, and Lei Zhang. 2018. Bottom-up and top-down attention for image captioning and visual question answering. In CVPR, pages 6077-6086.
Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural machine translation by jointly learning to align and translate. ICLR, abs/1409.0473.
Loïc Barrault, Fethi Bougares, Lucia Specia, Chiraag Lala, Desmond Elliott, and Stella Frank. 2018. Findings of the third shared task on multimodal machine translation. In WMT, pages 304-323.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2017. Enriching word vectors with subword information. TACL, 5:135-146.
Ozan Caglayan, Mercedes García-Martínez, Adrien Bardet, Walid Aransa, Fethi Bougares, and Loïc Barrault. 2017. NMTPY: A flexible toolkit for advanced neural machine translation systems. Prague Bull. Math. Linguistics, 109:15-28.
Jean-Benoit Delbrouck and Stéphane Dupont. 2018. UMONS submission for WMT18 multimodal translation task. In WMT, pages 643-647.
Desmond Elliott, Stella Frank, Loïc Barrault, Fethi Bougares, and Lucia Specia. 2017. Findings of the second shared task on multimodal machine translation and multilingual image description. In WMT, pages 215-233.
Diederik P. Kingma and Jimmy Ba. 2015. Adam: A method for stochastic optimization. In ICLR, pages 1-15.
Andrew Merritt, Chenhui Chu, and Yuki Arase. 2020. A corpus for English-Japanese multimodal neural machine translation with comparable sentences. CoRR, abs/2010.08725.
Hideki Nakayama, Akihiro Tamura, and Takashi Ninomiya. 2020. A visually-grounded parallel corpus with phrase-to-region linking. In LREC, pages 4204-4210.
Toshiaki Nakazawa, Hideki Nakayama, Chenchen Ding, Raj Dabre, Shohei Higashiyama, Hideya Mino, Isao Goto, Win Pa Pa, Anoop Kunchukuttan, Shantipriya Parida, Ondřej Bojar, Chenhui Chu, Akiko Eriguchi, Kaori Abe, Sadao Oda, and Yusuke Kurohashi. 2021. Overview of the 8th workshop on Asian translation. In WAT.
Toshiaki Nakazawa, Hideki Nakayama, Chenchen Ding, Raj Dabre, Shohei Higashiyama, Hideya Mino, Isao Goto, Win Pa Pa, Anoop Kunchukuttan, Shantipriya Parida, Ondřej Bojar, and Sadao Kurohashi. 2020. Overview of the 7th workshop on Asian translation. In WAT, pages 1-44.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In ACL, pages 311-318.
Bryan A. Plummer, Liwei Wang, Christopher M. Cervantes, Juan C. Caicedo, Julia Hockenmaier, and Svetlana Lazebnik. 2017. Flickr30k entities: Collecting region-to-phrase correspondences for richer image-to-sentence models. IJCV, pages 74-93.
Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. 2015. Faster R-CNN: Towards real-time object detection with region proposal networks. In NIPS, pages 91-99.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016. Neural machine translation of rare words with subword units. In ACL, pages 1715-1725.
Lucia Specia, Stella Frank, Khalil Sima'an, and Desmond Elliott. 2016. A shared task on multimodal machine translation and crosslingual image description. In WMT, pages 543-553.
Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. 2014. Sequence to sequence learning with neural networks. In NIPS, pages 3104-3112.
Hiroto Tamura, Tosho Hirasawa, Masahiro Kaneko, and Mamoru Komachi. 2020. TMU Japanese-English multimodal machine translation system for WAT 2020. In WAT, pages 80-91.
D. Teney, P. Anderson, X. He, and A. v. d. Hengel. 2018. Tips and tricks for visual question answering: Learnings from the 2017 challenge. In CVPR, pages 4223-4232.
Peter Young, Alice Lai, Micah Hodosh, and Julia Hockenmaier. 2014. From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions. TACL, 2:67-78.
Yuting Zhao, Mamoru Komachi, Tomoyuki Kajiwara, and Chenhui Chu. 2020. Double attention-based multimodal neural machine translation with semantic image regions. In EAMT, pages 105-114.
Yuting Zhao, Mamoru Komachi, Tomoyuki Kajiwara, and Chenhui Chu. 2021. Neural machine translation with semantically relevant image regions. In NLP. |
384,103 | A Wikipedia-LDA Model for Entity Linking with Batch Size Changing Instance Selection | Entity linking maps name mentions in context to entries in a knowledge base through resolving the name variations and ambiguities. In this paper, we propose two advancements for entity linking. First, a Wikipedia-LDA method is proposed to model the contexts as the probability distributions over Wikipedia categories, which allows the context similarity being measured in a semantic space instead of literal term space used by other studies for the disambiguation. Furthermore, to automate the training instance annotation without compromising the accuracy, an instance selection strategy is proposed to select an informative, representative and diverse subset from an auto-generated dataset. During the iterative selection process, the batch sizes at each iteration change according to the variance of classifier's confidence or accuracy between batches in sequence, which not only makes the selection insensitive to the initial batch size, but also leads to a better performance. The above two advancements give significant improvements to entity linking individually. Collectively they lead the highest performance on KBP-10 task. Being a generic approach, the batch size changing method can also benefit active learning for other tasks. | [
3021306,
8622546,
526503
] | A Wikipedia-LDA Model for Entity Linking with Batch Size Changing Instance Selection
AFNLP. Copyright AFNLP. November 8-13, 2011.
Wei Zhang
School of Computing ‡ Institute for Infocomm Research
National University of Singapore
Jian Su sujian@i2r.a-star.edu.sg
School of Computing ‡ Institute for Infocomm Research
National University of Singapore
Chew Lim Tan tancl@comp.nus.edu.sg
School of Computing ‡ Institute for Infocomm Research
National University of Singapore
A Wikipedia-LDA Model for Entity Linking with Batch Size Changing Instance Selection
Proceedings of the 5th International Joint Conference on Natural Language Processing
Proceedings of the 5th International Joint Conference on Natural Language Processing, Chiang Mai, Thailand. AFNLP. November 8-13, 2011.
Entity linking maps name mentions in context to entries in a knowledge base through resolving the name variations and ambiguities. In this paper, we propose two advancements for entity linking. First, a Wikipedia-LDA method is proposed to model the contexts as the probability distributions over Wikipedia categories, which allows the context similarity being measured in a semantic space instead of literal term space used by other studies for the disambiguation. Furthermore, to automate the training instance annotation without compromising the accuracy, an instance selection strategy is proposed to select an informative, representative and diverse subset from an auto-generated dataset. During the iterative selection process, the batch sizes at each iteration change according to the variance of classifier's confidence or accuracy between batches in sequence, which not only makes the selection insensitive to the initial batch size, but also leads to a better performance. The above two advancements give significant improvements to entity linking individually. Collectively they lead the highest performance on KBP-10 task. Being a generic approach, the batch size changing method can also benefit active learning for other tasks.
Introduction
Knowledge base population (KBP) 1 involves gathering information scattered among the documents of a large collection to populate a knowledge base (KB) (e.g. Wikipedia). This requires either linking entity mentions in the documents with entries in the KB or highlighting these mentions as new entries to current KB.
Entity linking (McNamee and Dang, 2009) involves both finding name variants (e.g. both "George H. W. Bush" and "George Bush Senior" refer to the 41st U.S. president) and name disambiguation (e.g. given "George Bush" and its context, we should be able to disambiguate which president it is referring to).
1 http://nlp.cs.qc.cuny.edu/kbp/2010/
Compared with Cross-Document Coreference (Bagga and Baldwin, 1998) which clusters the articles according to the entity mentioned, entity linking has a given entity list (i.e. the reference KB) to which we disambiguate the entity mentions. Moreover, in the articles, there are new entities not present in KB.
For name disambiguation in entity linking, there has been much previous work which demonstrates that modeling context is an important part of measuring document similarity. However, the traditional approach for entity linking treats the context as a bag of words, n-grams, noun phrases or/and co-occurring named entities, and measures context similarity by comparing weighted literal term vectors (Varma et al., 2009; Li et al., 2009; Zhang et al., 2010; Zheng et al., 2010; Dredze et al., 2010). Such literal matching suffers from a sparseness issue. For example, consider the following four observations of Michael Jordan without any term match:
1) Michael Jordan is a leading researcher in machine learning and artificial intelligence.
2) Michael Jordan is currently a full professor at the University of California, Berkeley.
3) Michael Jordan (born February, 1963) is a former American professional basketball player.
4) Michael Jordan wins NBA MVP of 91-92 season.
To measure the similarity of these contexts, the semantic knowledge underlying the words is needed.
Furthermore, current state-of-the-art entity linking systems (Dredze et al., 2010; Zheng et al., 2010) are based on a supervised learning approach requiring many annotated training instances to achieve good performance. However, entity linking annotation is highly dependent on the KB. When a new KB comes, the annotation process needs to be repeated. We have tried to automate this annotation process (Zhang et al., 2010). However, as discussed in that paper, the distribution of the auto-generated data is not consistent with the real dataset, because only some types of instances can be generated.
In this paper, we propose two approaches: (1) a Wikipedia-LDA model to effectively mine the semantic knowledge from the contexts of the mentions. Such a topic model allows us to measure the similarity between articles and KB entries in the semantic space of Wikipedia categories. (2) An instance selection strategy to effectively utilize the auto-generated annotation through an iterative process of selecting a representative, informative and diverse batch of instances at each iteration. The batch sizes at each iteration change according to the variance of the classifier's confidence or accuracy between batches in sequence, which makes the selection insensitive to the initial batch size and performs better than a fixed size.
We conduct evaluation on KBP-10 data (Ji et al., 2010). Experiments show that the Wikipedia-LDA model is able to effectively capture the underlying semantic information and produce a statistically significant improvement over literal matching alone. Correspondingly, instance selection can make the dataset more balanced, and it also produces a significant gain in entity linking performance. Collectively, the two advancements lead to the highest performance on the KBP-10 task. Being a generic approach, the batch size changing method proposed in this paper can also benefit active learning for other tasks.
The remainder of this paper is organized as follows. Section 2 introduces the framework for entity linking. We present our Wikipedia-LDA model in Section 3, and the instance selection in Section 4. Section 5 shows the experiments and discussions. Section 6 concludes our work.
Entity Linking Framework
Entity linking is done through two steps: name variation resolution and name disambiguation. Name variation resolution finds variants for each entry in KB and then generates the possible KB candidates for the given name mention by string matching. Name disambiguation is to map a mention to the correct entry in the candidate set.
Name Variation Resolution
Wikipedia contains many name variants of entities, such as confusable names, spelling variations, nicknames, etc. We extract the name variants of an entry in KB by leveraging the knowledge sources in Wikipedia: "titles of entity pages", "disambiguation pages", "redirect pages" and "anchor texts". With the acquired name variants for entries in KB, the possible KB candidates for a given name mention can be retrieved by string matching. If the given mention is an acronym, we first expand it from the given article, and then apply the entity linking process.
Name Disambiguation
First, using a learning-to-rank method, we rank all the retrieved KB candidates to identify the most likely candidate. In this learning-to-rank method, each name mention and its associated candidates are represented by a list of feature vectors. During linking, the score for each candidate entry is given by the ranker. The learning algorithm we use is ranking SVM (Herbrich et al., 2000).
Next, the preferred KB candidate is presented to a binary classifier (Vapnik, 1995) to determine whether it should be accepted as the target entry for the name mention. From here, we can decide whether the mention and the top candidate are linked. If not, the mention has no corresponding entry in KB (NIL). The base features adopted for both learning to rank and classification include 15 feature groups divided into 3 categories. A summary of the features is listed in Table 1. Due to the space limit, we only show the feature names, leaving out the feature details, which can be found in (Dredze et al., 2010; Zheng et al., 2010).
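A toy sketch of the candidate-retrieval step described above, using a small hand-made variant table in place of the variants actually mined from Wikipedia.

```python
# Toy sketch of candidate retrieval: map every known name variant to the KB
# entries it may refer to, then look up a mention by exact string match.
# The variant table is hand-made for illustration only.
from collections import defaultdict

def build_variant_index(kb_variants):
    index = defaultdict(set)
    for entry, variants in kb_variants.items():
        for variant in variants:
            index[variant.lower()].add(entry)
    return index

def retrieve_candidates(mention, index):
    return sorted(index.get(mention.lower(), set()))

if __name__ == "__main__":
    kb = {"George H. W. Bush": ["George H. W. Bush", "George Bush Senior", "George Bush"],
          "George W. Bush": ["George W. Bush", "George Bush"]}
    index = build_variant_index(kb)
    print(retrieve_candidates("George Bush", index))  # both entries are candidates
```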
Table 1 column headers: Categories, Feature Names.
Wikipedia-LDA Model
In the similar task of cross-document coreference (Han and Zhao, 2009) and in other tasks such as text classification (Wang and Domeniconi, 2008), Wikipedia concepts have been used to model text. A Wikipedia concept is a kind of entity-level topic.
In our approach, we instead use cross-entity topics, namely Wikipedia categories, to represent the semantic knowledge.
Thus, we model the contexts as the distributions over Wikipedia categories. Then, the similarity between the contexts can be measured in a semantically meaningful space. Finally, such semantic similarity, together with other base features, is incorporated in the trainable models to learn the ranker and classifier.
Modeling the Contexts as Distributions over Wikipedia Categories
Wikipedia requires contributors to assign categories to each article, which are defined as "major topics that are likely to be useful to someone reading the article". Thus, Wikipedia can serve as a document collection with multiple topical labels, where we can learn the posterior distribution over words for each topical label (i.e. Wikipedia category). Then, from the observed words in the mention's context and in the KB entry, we can estimate the distribution of the contexts over the Wikipedia categories. To obtain this distribution, we use a supervised Latent Dirichlet Allocation (LDA) model, labeled LDA, defined by Ramage et al. (2009), which represents a state-of-the-art method for multi-labeled text classification. It performs better on collections with more semantically diverse labels, which we need in order to leverage the large set of semantically diverse categories from Wikipedia as the topical labels.
Figure 1 shows a graphical representation of labeled LDA for the multi-labeled document collection. Labeled LDA is a three-level hierarchical Bayesian model. β is the multinomial distribution over words for a Wikipedia category, which has a Dirichlet prior with hyperparameter η. Both the category set Λ and the topic prior α influence the topic mixture θ. These distributions can be used to generate documents in the form of a collection of words (w). D is the number of documents, N is the document length and K is the number of categories.
After the model is trained on Wikipedia data, the distributions of the KB entry and of the article over the K categories are estimated by calculating the topic proportions θ. θ is given by an EM procedure that treats θ as a parameter with Z missing.
Context Similarity
We have mapped the contexts to a K-dimensional semantic space. Thus, we can calculate the context similarity by their distance in this space. To measure the context similarity in the K-dimensional topical space, we calculate the cosine value as below:
$$\mathrm{Sim}(d, e) = \frac{\sum_{i=1}^{K} \theta_{d,i} \times \theta_{e,i}}{\sqrt{\sum_{i=1}^{K} (\theta_{d,i})^{2}} \times \sqrt{\sum_{i=1}^{K} (\theta_{e,i})^{2}}} \qquad (1)$$
where d denotes the document containing the name mention, e denotes the KB entry, and θ_{d,i} and θ_{e,i} are their respective topic proportions for the i-th Wikipedia category.
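A small sketch of Equation (1) applied to two K-dimensional topic-proportion vectors; the vectors below are toy values, not output of the trained model.

```python
# Sketch of Equation (1): cosine similarity between the topic proportions of
# the source document (theta_d) and of the KB entry (theta_e).
import math

def cosine_similarity(theta_d, theta_e):
    dot = sum(d * e for d, e in zip(theta_d, theta_e))
    norm_d = math.sqrt(sum(d * d for d in theta_d))
    norm_e = math.sqrt(sum(e * e for e in theta_e))
    if norm_d == 0.0 or norm_e == 0.0:
        return 0.0
    return dot / (norm_d * norm_e)

if __name__ == "__main__":
    theta_d = [0.6, 0.3, 0.1]   # toy distribution over K = 3 categories
    theta_e = [0.5, 0.4, 0.1]
    print(round(cosine_similarity(theta_d, theta_e), 4))
```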
Such semantic similarity can be further combined with other term matching features for SVM ranker and classifier of entity linking.
Wikipedia Category Selection
Each article in Wikipedia is assigned several categories by the contributors, as requested. However, from our observation some categories in Wikipedia may not be suitable to model the topics of a document. Thus, we consider selecting an appropriate subset of the Wikipedia categories to effectively model the contexts. We examined five possible category subsets: All, All-admin, isa_all, isa_class, and isa_instance.
Wikipedia contains 165,744 categories. This is the set All.
There are some meta-categories used for encyclopedia management in Wikipedia, e.g. "Wikipedia editing guidelines", which are unsuitable to describe the topics of a document. Thus, we remove the categories which contain any of the following strings: wikipedia, wikiprojects, lists, mediawiki, template, user, portal, categories, articles and pages. This leaves 127,325 categories (All-admin).
However, some categories such as "people by status" and "Geography by place" in the Alladmin set cannot serve as the topics of a document properly. Thus, we need to remove them from the category set. From our observation, the topical categories are usually in is-a relation. For example, the relation between the two topical categories "Olympic basketball players" and "Olympic competitors" is an is-a relation, while the categories to be removed "people by status" and "Geography by place" are not in any is-a relation. We thus only select the categories connected by is-a relation to isa_all subset.
Since the categories are connected by unlabeled links in Wikipedia, we need to identify is-a relation links. We use the four methods as below proposed by Ponzetto and Strube (2007) to distinguish is-a and not-is-a relation links.
We first use a syntax-based method: assign is-a to the link between two categories if they share the same lexical head lemma (e.g. "British Computer Scientists" and "Computer Scientists").
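A toy approximation of this syntax-based check, taking the last token as the lexical head instead of using a real head finder and lemmatizer; it is meant only to illustrate the rule.

```python
# Toy approximation of the syntax-based is-a check: two category names are
# linked by is-a if they share the same lexical head. The head is approximated
# here by the last lower-cased token; a real implementation would use a proper
# head finder and lemmatizer.
def same_head(category, supercategory):
    head = category.split()[-1].lower()
    super_head = supercategory.split()[-1].lower()
    return head == super_head

if __name__ == "__main__":
    print(same_head("British Computer Scientists", "Computer Scientists"))  # True
    print(same_head("People by status", "People"))                          # False
```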
Then, we use structural information from the category network: (1) for a category c, look for a Wikipedia article P with the same name. Take all of P's categories whose lexical heads are plural nouns, CP = {cp_1, cp_2, ..., cp_n}. Take all supercategories of c, SC = {sc_1, sc_2, ..., sc_k}. If the head lemma of one of the cp_i matches the head lemma of some sc_j, label the relation between c and sc_j as is-a.
(2) assign is-a label to the link between two categories if a Wikipedia article is redundantly categorized under both of them. For example, "Internet" is categorized under both "Computer networks" and "Computing" and there is a link between "Computer networks" and "Computing". Then this link is assigned is-a.
Next, we use lexical-syntactic patterns in a corpus. This method uses two sets of patterns. One set is used to identify is-a relations (Caraballo, 1999; Hearst, 1992), for example "such NP_1 as NP_2", where NP_1 and NP_2 are the values of categories and their subcategories respectively. The second set is used to identify not-is-a relations, for example "NP_1 has NP_2", where the link between NP_1 and NP_2 will be assigned not-is-a. These patterns are used with a corpus built from Wikipedia articles, and separately with the Tipster corpus (Harman and Liberman, 1993). The label is assigned by majority voting between the frequency counts for the two types of patterns.
Finally, we assign is-a labels to links based on transitive closure: all categories along an is-a chain are connected to each other by is-a links.
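A minimal sketch of this transitive-closure step over a directed graph of is-a links; the toy edges below are hypothetical.

```python
# Minimal sketch of the transitive-closure step: every category reachable from
# c along is-a links is also connected to c by an is-a link.
from collections import defaultdict

def transitive_closure(isa_edges):
    graph = defaultdict(set)
    for child, parent in isa_edges:
        graph[child].add(parent)
    closure = set()
    for start in list(graph):
        stack, seen = [start], set()
        while stack:
            node = stack.pop()
            for parent in graph.get(node, ()):
                if parent not in seen:
                    seen.add(parent)
                    closure.add((start, parent))
                    stack.append(parent)
    return closure

if __name__ == "__main__":
    edges = [("Olympic basketball players", "Olympic competitors"),
             ("Olympic competitors", "Sportspeople")]  # toy is-a links
    for pair in sorted(transitive_closure(edges)):
        print(pair)
```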
Another fact is that the categories defined by Wikipedia are not all classes. For example, "Microsoft" is an instance of the class "Computer and Video Game Companies", and it appears both as an article page and as a category in Wikipedia. We would like to further examine the two different subsets of isa_all, isa_class and isa_instance, for entity linking. To distinguish instances and classes in the isa_all set, we use a structure-based method (Zirn et al., 2008). The categories which have other subcategories or Wikipedia articles connected to them by an is-a relation are assigned the class label. In our problem, the remaining categories are approximately regarded as instances.
Instance Selection Strategy
In this section, we explore a method to effectively utilize a large-scale auto-generated data for entity linking.
In our previous work (Zhang et al., 2010), we proposed automatically gathering large-scale training instances for entity linking. The basic idea is to take a document with an unambiguous mention referring to an entity e1 in KB and replace it with a variation which may refer to e1, e2 or others. For example, a mention "Abbott Laboratories" in a document only refers to one KB entry, "Abbott Laboratories". "Abbott Laboratories" in the document is replaced with its ambiguous synonyms, including "Abbott", "ABT", etc. Following this approach, from the 1.7 million documents in the KBP-10 text collection, we generate 45,000 instances.
However, the distribution of the auto-generated data is not consistent with the real dataset, since the data generation process can only create some types of training instances. In the case of "Abbott Laboratories", more than ten "Abbott" mentions are linked to the "Abbott Laboratories" entry in KB, but no "Abbott" example is linked to other entries like "Bud Abbott", "Abbott Texas", etc. Thus, we need an instance selection approach to reduce the effect of this distribution problem. However, the traditional instance selection approaches (Brighton and Mellish, 2002; Liu and Motoda, 2002) can only address two problems: 1) a large dataset causes response time to become slow, and 2) noisy instances affect accuracy, which are different from our needs here. We thus propose an instance selection approach to select a more balanced subset from the auto-annotated instances. This instance selection strategy is similar to active learning (Shen et al., 2004; Brinker, 2003) for reducing the manual annotation effort on training instances through proposing only the useful candidates to annotators. As we already have a large set of auto-generated training instances, the selection here is instead a fully automatic process to get a useful and more balanced subset.
We use the SVM classifier mentioned in Section 2.2 to select the instances from the large dataset. The initial classifier can be trained on a set of initial training instances, which can be a small part of the whole auto-generated data, or the limited manually annotated training instances available, e.g. those provided by KBP-10.
Our instance selection method is an iterative process. We select an informative, representative and diverse batch of instances based on current hyperplane and add them to the current training instance set at each iteration to further adjust the hyperplane for more accurate classification.
We use the distance as the measure to select informative instances. The distance of an instance's feature vector to the hyperplane is computed as follows:
$$\mathrm{Dist}(w) = \left| \sum_{i=1}^{N} \alpha_i y_i k(s_i, w) + b \right| \qquad (2)$$
where w is the feature vector of the instance, and α_i, y_i and s_i correspond to the weight, class and feature vector of the i-th support vector, respectively. N is the number of support vectors and b is the bias term of the hyperplane.
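A compact sketch of Equation (2) in this dual form, using a linear kernel and made-up support vectors; it is only meant to make the informativeness measure concrete, not to reproduce the actual classifier.

```python
# Sketch of Equation (2): distance of an instance to the SVM hyperplane in the
# dual form over support vectors. Support vectors, weights and the linear
# kernel below are toy values.
def linear_kernel(x, y):
    return sum(a * b for a, b in zip(x, y))

def distance_to_hyperplane(w, support_vectors, alphas, labels, bias, kernel=linear_kernel):
    score = sum(alpha * y * kernel(sv, w)
                for alpha, y, sv in zip(alphas, labels, support_vectors))
    return abs(score + bias)

if __name__ == "__main__":
    svs = [[1.0, 0.0], [0.0, 1.0]]          # toy support vectors
    alphas, labels, bias = [0.7, 0.7], [1, -1], 0.1
    print(distance_to_hyperplane([0.5, 0.2], svs, alphas, labels, bias))
```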
Next, we quantify the representativeness of an instance by its density. Such density is defined as the average similarity between this instance and all other instances in the dataset. If an instance has the largest density among all the instances in the dataset, it can be regarded as the centroid of this set and also the most representative instance.
$$\mathrm{Density}(w_i) = \frac{\sum_{j \neq i} \mathrm{Sim}(w_i, w_j)}{N - 1} \qquad (3)$$
where the w_j are the other instances in the dataset and N is the size of the dataset. Sim is the cosine similarity.
We combine the informativeness and the representativeness by the function λ(1 − Dist(w)) + (1 − λ)Density(w), in which Dist and Density are first normalized. The individual importance of each part in this function is adjusted by the trade-off parameter λ (set to 0.5 in our experiment). The instance with the maximum value of this function is selected first for the batch. This instance is then compared individually with the instances already selected in the current batch to make sure that their similarity is less than a threshold δ. This is to diversify the training instances in the batch so as to maximize the contribution of each instance. We set δ to the average similarity between the instances in the original dataset. When a batch of α instances has been selected, we add them to the training instance set and retrain the classifier.
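A simplified sketch of one selection round, assuming precomputed and already-normalized Dist and Density values, a pairwise similarity function, and the λ and δ notation used above; the toy candidates and scores are hypothetical.

```python
# Simplified sketch of selecting one batch: score each candidate by
# lambda*(1 - Dist) + (1 - lambda)*Density, take candidates in decreasing
# score order, and skip any candidate whose similarity to an already-selected
# instance exceeds delta. Dist/Density are assumed normalized to [0, 1].
def select_batch(candidates, dist, density, sim, batch_size, lam=0.5, delta=0.3):
    scored = sorted(candidates,
                    key=lambda c: lam * (1.0 - dist[c]) + (1.0 - lam) * density[c],
                    reverse=True)
    batch = []
    for cand in scored:
        if len(batch) >= batch_size:
            break
        if all(sim(cand, chosen) <= delta for chosen in batch):
            batch.append(cand)   # diverse enough with respect to the batch
    return batch

if __name__ == "__main__":
    cands = ["a", "b", "c", "d"]
    dist = {"a": 0.1, "b": 0.2, "c": 0.15, "d": 0.9}
    dens = {"a": 0.8, "b": 0.7, "c": 0.75, "d": 0.2}
    toy_sim = lambda x, y: 1.0 if x == y else 0.1
    print(select_batch(cands, dist, dens, toy_sim, batch_size=2))
```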
Such a batch learning process stops at the peak confidence of the SVM classifier, since Vlachos (2008) shows that the confidence of the SVM classifier is consistent with its performance. The confidence can be estimated as the sum of the distances to the hyperplane for the instances of an un-annotated development set. The development set guides the selection process to solve the distribution problem mentioned above. Alternatively, we can also leverage some annotated development data and use accuracy instead to guide the selection process. We explore both approaches for different application scenarios in our experiments.
We now need to decide how to set the batch size α at each iteration. It is straightforward to set a fixed batch size α (Fixed Number), which never changes during the process. However, there are some limitations, as demonstrated in our experiments in this paper. First, the performance is sensitive to the batch size. Second, if we set the batch size too big, it impedes the further improvement allowed by a small batch size; but if we set the batch size too small from the beginning, it dramatically increases the number of iterations needed, which makes the selection too slow. To resolve the above issues, we change the batch size according to the variance of the classifier's confidence on an un-annotated set. Thus, we assign an integer to α_1 and α_2 in the first two iterations, and α_i (i > 2) in the i-th iteration is computed as below (Flexible Number):
$$\alpha_i = \alpha_{i-1} \times \frac{C_{i-1} - C_{i-2}}{C_{i-2} - C_{i-3}} \qquad (4)$$
where C_i is the confidence of the classifier on the un-annotated dataset at the i-th iteration. Figure 2 summarizes the selection procedure.
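A direct transcription of Equation (4) with toy confidence values; the guard against a zero denominator is an added safeguard that the equation itself does not specify.

```python
# Sketch of Equation (4): the batch size at iteration i scales the previous
# batch size by the ratio of the two most recent confidence changes.
def next_batch_size(prev_alpha, c_prev, c_prev2, c_prev3):
    denom = c_prev2 - c_prev3
    if denom == 0:
        return prev_alpha                       # fallback: keep the previous size
    return max(1, int(round(prev_alpha * (c_prev - c_prev2) / denom)))

if __name__ == "__main__":
    # toy confidences C_{i-3}, C_{i-2}, C_{i-1} and previous batch size 80
    print(next_batch_size(prev_alpha=80, c_prev=0.92, c_prev2=0.88, c_prev3=0.80))
```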
Experiments and Discussions
Experimental Setup
In our study, we use the KBP-10 knowledge base and document collection to evaluate our approach for entity linking. The KB is auto-generated from Wikipedia. For pre-processing, we perform sentence boundary detection derived from the Stanford parser (Klein and Manning, 2003), named entity recognition using an SVM-based system trained and tested on ACE 2005 with 92.5%(P), 84.3%(R) and 88.2%(F), and co-reference resolution using an SVM-based resolver trained and tested on ACE 2005 with 79.5%(P), 66.7%(R) and 72.5%(F). In our implementation, we use the binary SVM Light developed by Joachims (1999) and SVM Rank developed by Joachims (2006). The classifier and ranker are trained with default parameters. The version we use is released on Oct. 08, 2008. The Stanford Topic Model Toolbox is used for Labeled-LDA with default learning parameters. We adopt the micro-averaged accuracy used in KBP-10 to evaluate our Entity Linker, i.e. the number of correct links divided by the total number of mentions.

Figure 2 (Instance Selection Strategy) can be summarized as follows:
Loop until Batch Set is full:
• Select A_i with the maximal value of P = λ(1 − Dist(A_i)) + (1 − λ)Density(A_i) from A
• RepeatFlag = false
• Loop for each A_k in Batch Set: if Sim(A_i, A_k) > δ then RepeatFlag = true and stop the loop
• If RepeatFlag == false then add A_i to Batch Set
• Remove A_i from A
TrainingSet = TrainingSet ∪ Batch Set

System with Wikipedia-LDA
Table 2 lists the performance of entity linking with overall accuracy (ALL) as well as accuracy on subsets (Nil, Non-Nil, ORG, GPE and PER) of the data. In the first row, only the base features described in Section 2.2 are used. This baseline system models the contexts with literal terms.
The second to sixth rows report the results of combining the base features with semantic knowledge (i.e. the context similarity is computed over the five different subsets of Wikipedia categories mentioned in Section 3.3). We see that all five systems with semantic features perform better than the baseline system, which models the context similarity as literal term matching. In particular, isa_all and isa_class achieve significantly better results than the baseline (p < 0.05, χ² test). These results show that the semantic knowledge underlying the contexts has good disambiguation power for entity linking.

Table 3 explains the reason for the improvements. Table 3 shows four sample Wikipedia categories and the top 15 highly probable words identified by the topic model for these categories. The topic model successfully assigns a high probability to the words "researcher" and "professor" in the category "Members of the National Academy of Sciences", and assigns a high probability to the words "nba", "basketball", "professional" and "season" in the category "American basketball players". Such semantic knowledge learned from Wikipedia data is helpful in the example of "Michael Jordan" mentioned in Section 1. This shows that entity linking can benefit from the semantic information underlying the words and overcome the shortcomings of literal matching.

We further compare the performance of the five different category subsets. From the last five rows of Table 2, we can see that the isa_all subset performs best among the five subsets for disambiguation. This should be because isa_all includes more categories than isa_class and isa_instance, and thus can capture more semantic information. However, although All and All-admin include even more categories, they introduce many categories which are unsuitable for modeling the topics of a news article or blog text, such as the two categories mentioned in Section 3.3: "people by status", which is not in an is-a relation, and "Wikipedia editing guidelines", which is used for encyclopedia management.

System with Instance Selection
Table 4: Results of Entity Linking for Instance Selection
Table 4 shows the results of evaluating our instance selection strategy. These experiments use the base features (Section 2.2).
With and Without Manually Annotated Data
We want to find out the effectiveness of our instance selection strategy when no manually annotated data is available. In the first block of Table 4, we compare the performance of the systems with and without instance selection. "Auto_Gen" directly uses the auto-generated dataset described at the beginning of Section 4 as the training set, and "Auto_Gen+IS" applies our instance selection to the auto-generated data for training. In the instance selection process, we use the KB entries with more than 15 linked documents in the auto-generated data as our Initial Training Set (1,800 instances) to train a classifier, and then use this classifier to select instances from the auto-generated dataset. The first block of Table 4 shows that our instance selection gives significant improvements (p < 0.05, χ² test). These improvements show that our selection strategy makes the training set more balanced and can effectively reduce the effect of the distribution problem in the large-scale dataset.
We further evaluate our instance selection strategy when a large manually annotated dataset is available, in the second block of Table 4. "KBP" is trained on the manually annotated KBP-10 training set. "KBP+Auto_Gen" is trained on the KBP-10 set and the auto-generated set. "KBP+Auto_Gen+IS" uses the KBP-10 training set as the Initial Training Set and applies the instance selection process to the auto-generated data. Comparing "KBP+Auto_Gen" with "KBP", we can see that the unbalanced distribution caused a serious problem which even pulled down the performance achieved by the large manual annotation alone. The experimental results of "KBP" and "KBP+Auto_Gen+IS" show that our instance selection strategy is necessary to bring further improvements over the large manually annotated dataset (5,404 instances). These significant improvements (p < 0.05, χ² test) are achieved by incorporating more training instances in a reasonable way.
Comparing the performance of "Auto_Gen+IS" with "KBP" in Table 4, we find that our method performs better without the intensive work of annotating 5,404 articles. This proves that using our instance selection can save labor without compromising entity linking accuracy. The nearly identical performance of "Auto_Gen+IS" and "KBP+Auto_Gen+IS" also confirms the above conclusion.
Fixed Size Vs. Changing Size
We are also interested in the effectiveness of the two schemes (i.e. Fixed Number and Flexible Number) for setting the batch size α mentioned in Section 4. In Figure 3, we set the batch size α in the Fixed Number scheme, and α_1 and α_2 in the Flexible Number scheme, to different numbers from 50 to 140, increasing by 10 each time. We conduct instance selection on the auto-generated data. Figure 3 shows that the flexible batch size outperforms the fixed size for entity linking. In particular, the improvement at α = 50, 60 and 70 is significant (p < 0.05, χ² test). This proves that the batch size should be in line with the variance of the classifier's confidence at each iteration of instance selection. Furthermore, in this figure, the performance of the flexible batch size is more stable than that of the Fixed Number scheme. This shows that the Flexible Number scheme makes the entity linking system insensitive to the initial batch size during the instance selection process. Thus the initial batch size of the experiments in Table 4 is set to 80; we believe that very similar performance can be achieved even with a different initial size. Another fact is that the selection process is similar to active learning, which needs to manually annotate the selected instances in each batch. Thus, being a generic approach, the batch size changing method proposed in this paper can also benefit active learning for other tasks.
(Un-)Annotated Development Set
In the above study, we directly use the test set without annotations as the development set for instance selection to optimize our solution to the application data. Such an approach will be useful when the application set is available in advance as in the case with KBP benchmarks.
Figure 4: Annotated Development data
When the application set is unavailable beforehand, in other words, when the articles to be linked only arrive one after the other at linking time, we instead rely on the accuracy on an annotated development set for the instance selection. Figure 4 shows the performance for different sizes of the annotated development set. The results show that the different sizes yield more or less the same performance. We only need a small amount of annotated development data, 500 articles in our study, to guide the instance selection and achieve a performance similar to using the unannotated test set as the development data.
Overall Result Combining Two Approaches
We also evaluate our model which combines the Wikipedia-LDA and Instance Selection together on KBP-10 data, and compare our method with the top 7 systems in KBP-10 shared task (Ji et al., 2010). As shown in Figure 5, the first column is the performance of our system for entity linking, which outperforms the best solution 7 in KBP-10 shared task.
Conclusion
In this paper, we explored two innovative approaches for entity linking. We proposed a Wikipedia-LDA model for entity linking, which can discover the semantic knowledge underlying the contexts. We also investigated the effectiveness of five subsets of Wikipedia categories for modeling the contexts. Furthermore, we proposed a batch size changing instance selection strategy to reduce the effect of the distribution problem in the auto-generated data. It enables the entity linking system to achieve state-of-the-art performance without intensive manual labor. Meanwhile, the flexible batch size not only makes the selection insensitive to the initial batch size, but also leads to better performance than a fixed batch size. The above two advancements significantly improve the entity linking system individually, and collectively they lead to the highest performance on the KBP-10 task.
Figure 1: Graphical model of Labeled LDA
Figure 2: Instance Selection Strategy
Figure 3: Performance Curves for Two Batch Size Schemes
Figure 5: A Comparison with KBP-10 Systems
Table 1: Base Feature Set
Table 3: Sample Wikipedia Categories and Corresponding Top 15 Words
http://en.wikipedia.org/wiki/Wikipedia:Disambiguation
http://en.wikipedia.org/wiki/Wikipedia:Redirect
http://en.wikipedia.org/wiki/Template:Infobox
http://download.wikipedia.org
http://nlp.stanford.edu/software/tmt/tmt-0.3/
Another system submission shows 86.8%. However, it accesses the web, which is not allowed in the KBP benchmark, since the purpose is to develop a standalone system, which is our focus here as well.
Acknowledgment
This work is partially supported by the Microsoft Research Asia eHealth Theme Program.
A. Bagga and B. Baldwin. 1998. Entity-Based Cross-Document Coreferencing Using the Vector Space Model. In 36th Annual Meeting of the Association of Computational Linguistics.
H. Brighton and C. Mellish. 2002. Advances in Instance Selection for Instance-Based Learning Algorithms. Data Mining and Knowledge Discovery.
K. Brinker. 2003. Incorporating Diversity in Active Learning with Support Vector Machines. In Proceedings of ICML.
S. A. Caraballo. 1999. Automatic construction of a hypernym-labeled noun hierarchy from text. In Proceedings of the 37th Annual Meeting of the Association for Computational Linguistics, College Park, Md., 20-26 June.
M. Dredze, P. McNamee, D. Rao, A. Gerber and T. Finin. 2010. Entity Disambiguation for Knowledge Base Population. In 23rd International Conference on Computational Linguistics (COLING 2010), August 23-27, 2010, Beijing, China.
X. Han and J. Zhao. 2009. Named Entity Disambiguation by Leveraging Wikipedia Semantic Knowledge. In Proceedings of the 18th ACM Conference on Information and Knowledge Management.
D. Harman and M. Liberman. 1993. TIPSTER Complete. LDC93T3A, Philadelphia, Penn. Linguistic Data Consortium.
M. A. Hearst. 1992. Automatic acquisition of hyponyms from large text corpora. In Proceedings of the 15th International Conference on Computational Linguistics, Nantes, France, 23-28 August.
R. Herbrich, T. Graepel and K. Obermayer. 2000. Large Margin Rank Boundaries for Ordinal Regression. Advances in Large Margin Classifiers, pages 115-132.
H. Ji, R. Grishman, H. Dang, K. Griffitt and J. Ellis. 2010. Overview of the TAC 2010 Knowledge Base Population Track. In Proceedings of Text Analysis Conference 2010.
T. Joachims. 1999. Making Large-Scale SVM Learning Practical. Advances in Kernel Methods - Support Vector Learning, MIT Press.
T. Joachims. 2006. Training Linear SVMs in Linear Time. In The ACM Conference on Knowledge Discovery and Data Mining (KDD).
D. Klein and C. D. Manning. 2003. Fast Exact Inference with a Factored Model for Natural Language Parsing. In Advances in Neural Information Processing Systems 15 (NIPS 2002), MIT Press, Cambridge, MA, pages 3-10.
F. Li, Z. Zheng, F. Bu, Y. Tang, X. Zhu and M. Huang. 2009. THU QUANTA at TAC 2009 KBP and RTE Track. In Text Analysis Conference 2009 (TAC 09).
H. Liu and H. Motoda. 2002. On Issues of Instance Selection. Data Mining and Knowledge Discovery, 6:115-130.
P. McNamee and H. T. Dang. 2009. Overview of the TAC 2009 Knowledge Base Population Track. In Proceedings of Text Analysis Conference 2009.
P. McNamee et al. 2009. HLTCOE Approaches to Knowledge Base Population at TAC 2009. In Proceedings of Text Analysis Conference 2009 (TAC 09).
S. P. Ponzetto and M. Strube. 2007. Deriving a Large Scale Taxonomy from Wikipedia. In Proceedings of the 22nd National Conference on Artificial Intelligence, Vancouver, B.C., 22-26.
D. Ramage, D. Hall, R. Nallapati and C. D. Manning. 2009. Labeled LDA: A supervised topic model for credit attribution in multi-labeled corpora. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing.
D. Shen, J. Zhang, J. Su, G. D. Zhou and C. L. Tan. 2004. Multi-Criteria-based Active Learning for Named Entity Recognition. In Proceedings of ACL 2004.
V. Vapnik. 1995. The Nature of Statistical Learning Theory. Springer-Verlag, New York.
V. Varma et al. 2009. IIIT Hyderabad at TAC 2009. In Proceedings of Text Analysis Conference 2009 (TAC 09).
A. Vlachos. 2008. A Stopping Criterion for Active Learning. Computer Speech and Language, 22(3):295-312.
P. Wang and C. Domeniconi. 2008. Building Semantic Kernels for Text Classification Using Wikipedia. In 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining.
W. Zhang, J. Su, C. L. Tan and W. T. Wang. 2010. Entity Linking Leveraging Automatically Generated Annotation. In 23rd International Conference on Computational Linguistics, August 23-27, 2010.
Z. Zheng, F. Li, X. Zhu and M. Huang. 2010. Learning to Link Entities with Knowledge Base. In Proceedings of the 11th Annual Conference of the North American Chapter of the Association for Computational Linguistics (NAACL), Los Angeles, CA.
C. Zirn, V. Nastase and M. Strube. 2008. Distinguishing Between Instances and Classes in the Wikipedia Taxonomy. In Proceedings of the 5th European Semantic Web Conference, Tenerife, Spain, 1-5 June 2008. |
1,709,713 | Unsupervised discovery of morphologically related words based on orthographic and semantic similarity | We present an algorithm that takes an unannotated corpus as its input, and returns a ranked list of probable morphologically related pairs as its output. The algorithm tries to discover morphologically related pairs by looking for pairs that are both orthographically and semantically similar, where orthographic similarity is measured in terms of minimum edit distance, and semantic similarity is measured in terms of mutual information. The procedure does not rely on a morpheme concatenation model, nor on distributional properties of word substrings (such as affix frequency). Experiments with German and English input give encouraging results, both in terms of precision (proportion of good pairs found at various cutoff points of the ranked list), and in terms of a qualitative analysis of the types of morphological patterns discovered by the algorithm. | [
9558665,
10986188,
35102549,
8989479
] | Unsupervised discovery of morphologically related words based on orthographic and semantic similarity
Marco Baroni
OFAI
Schottengasse 3, A-1010 Vienna, Austria
Johannes Matiasek
OFAI
Schottengasse 3, A-1010 Vienna, Austria
Harald Trost harald@ai.univie.ac.at
IMKAI
Freyung 6, A-1010 Vienna, Austria
Unsupervised discovery of morphologically related words based on orthographic and semantic similarity
We present an algorithm that takes an unannotated corpus as its input, and returns a ranked list of probable morphologically related pairs as its output. The algorithm tries to discover morphologically related pairs by looking for pairs that are both orthographically and semantically similar, where orthographic similarity is measured in terms of minimum edit distance, and semantic similarity is measured in terms of mutual information. The procedure does not rely on a morpheme concatenation model, nor on distributional properties of word substrings (such as affix frequency). Experiments with German and English input give encouraging results, both in terms of precision (proportion of good pairs found at various cutoff points of the ranked list), and in terms of a qualitative analysis of the types of morphological patterns discovered by the algorithm.
Introduction
In recent years, there has been much interest in computational models that learn aspects of the morphology of a natural language from raw or structured data. Such models are of great practical interest as tools for descriptive linguistic analysis and for minimizing the expert resources needed to develop morphological analyzers and stemmers. From a theoretical point of view, morphological learning algorithms can help answer questions related to human language acquisition.
In this study, we present a system that, given a corpus of raw text from a language, returns a ranked list of probable morphologically related word pairs. For example, when run with the Brown corpus as its input, our system returned a list with pairs such as pencil/pencils and structured/unstructured at the top.
Our algorithm is completely knowledge-free, in the sense that it processes raw corpus data, and it does not require any form of a priori information about the language it is applied to. The algorithm performs unsupervised learning, in the sense that it does not require a correctly-coded standard to (iteratively) compare its output against.
The algorithm is based on the simple idea that a combination of formal and semantic cues should be exploited to identify morphologically related pairs. In particular, we use minimum edit distance to measure orthographic similarity, 1 and mutual information to measure semantic similarity.
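A standard dynamic-programming implementation of minimum edit distance with unit costs; the cost scheme is illustrative and not necessarily the exact one used in the system described here.

```python
# Standard minimum edit distance (Levenshtein) with unit costs for insertion,
# deletion and substitution, computed row by row.
def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,        # deletion
                            curr[j - 1] + 1,    # insertion
                            prev[j - 1] + cost  # substitution / match
                            ))
        prev = curr
    return prev[-1]

if __name__ == "__main__":
    print(edit_distance("structured", "unstructured"))  # 2
    print(edit_distance("pencil", "pencils"))            # 1
```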
The algorithm does not rely on the notion of affix, and it does not depend on global distributional properties of substrings (such as affix frequency). Thus, at least in principle, the algorithm is well-suited to discover pairs that are related by rare and/or nonconcatenative morphological processes.
The algorithm returns a list of related pairs, but it does not attempt to extract the patterns that relate the pairs. As such, it can be used as a tool to pre-process corpus data for an analysis to be performed by a human morphologist, or as the first step of a fully automated morphological learning program, to be followed, for example, by a rule induction procedure that extracts correspondence patterns from paired forms. See the last section of this paper for further discussion of possible applications. We tested our model with German and English input. Our results indicate that the algorithm is able to identify a number of pairs related by a variety of derivational and inflectional processes with a remarkably high precision rate. The algorithm is also discovering morphological relationships (such as German plural formation with umlaut) that would probably be harder to discover using affix-based approaches.
The remainder of the paper is organized as follows: In section 2, we shortly review related work. In section 3, we present our model. In section 4, we discuss the results of experiments with German and English input. Finally, in section 5 we summarize our main results, we sketch possible directions that our current work could take, and we discuss some potential uses for the output of our algorithm.
Related work
For space reasons, we discuss here only three approaches that are closely related to ours. See, for example, Goldsmith (2001) for a very different (possibly complementary) approach, and for a review of other relevant work.
Jacquemin (1997)
Jacquemin (1997) presents a model that automatically extracts morphologically related forms from a list of English two-word medical terms and a corpus from the medical domain.
The algorithm looks for correspondences between two-word terms and orthographically similar pairs of words that are adjacent in the corpus. For example, the list contains the term artificial ventilation, and the corpus contains the phrase artificially ventilated. Jacquemin's algorithm thus postulates the (paired) morphological analyses artificial ventilation and artificial-ly ventilat-ed.
Similar words, for the purposes of this pairing procedure, are simply words that share a common left substring (with constraints that we do not discuss here).
Jacquemin's procedure then builds upon these early steps by clustering together sets that follow the same patterns, and using these larger classes to look for spurious analyses. Finally, the algorithm tries to cluster classes that are related by similar, rather than identical, suffixation patterns. Again, we will not describe here how this is accomplished.
Our basic idea is related to that of Jacquemin, but we propose an approach that is more general both in terms of orthography and in terms of semantics. In terms of orthography, we do not require that two strings share the left (or right) substring in order to constitute a candidate pair. Thus, we are not limited to affixal morphological patterns. Moreover, our algorithm extracts semantic information directly from the input corpus, and thus it does not require a precompiled list of semantically related pairs.
Schone and Jurafsky (2000)
Schone and Jurafsky (2000) present a knowledge-free unsupervised model in which orthography-based distributional cues are combined with semantic information automatically extracted from word co-occurrence patterns in the input corpus.
They first look for potential suffixes by searching for frequent word-final substrings. Then, they look for potentially morphologically related pairs, i.e., pairs that end in potential suffixes and share the left substring preceding those suffixes. Finally, they look, among those pairs, for those whose semantic vectors (computed using latent semantic analysis) are significantly correlated. In short, the idea behind the semantic component of their model is that words that tend to co-occur with the same set of words, within a certain window of text, are likely to be semantically correlated words.
While we follow Schone and Jurafsky's idea of combining orthographic and semantic cues, our algorithm differs from them in both respects. From the point of view of orthography, we rely on the comparison between individual word pairs, without requiring that the two words share a frequent affix, and indeed without requiring that they share an affix at all.
From the point of view of semantics, we compute scores based on mutual information instead of latent semantic analysis. Thus, we only look at the cooccurrence patterns of target words, rather than at the similarity of their contexts.
Future research should try to assess to what extent these two approaches produce significantly different results, and/or to what extent they are complementary.
Yarowsky and Wicentowski (2000)
Yarowsky and Wicentowski (2000) propose an algorithm that extracts morphological rules relating roots and inflected forms of verbs (but the algorithm can be extended to other morphological relations).
Their algorithm performs unsupervised, but not completely knowledge-free, learning. It requires a table of canonical suffixes for the relevant parts of speech of the target language, a list of the content word roots with their POS (and some information about the possible POS/inflectional features of other words), a list of the consonants and vowels of the language, information about some characteristic syntactic patterns and, if available, a list of function words.
The algorithm uses a combination of different probabilistic models to find pairs that are likely to be morphologically related. One model matches root + inflected form pairs that have a similar frequency profile. Another model matches root + inflected form pairs that tend to co-occur with the same subjects and objects (identified using simple regular expressions). Yet another model looks for words that are orthographically similar, in terms of a minimum edit distance score that penalizes consonant changes more than vowel changes. Finally, the rules relating stems and inflected forms that the algorithm extracts from the pairs it finds in an iteration are used as a fourth probabilistic model in the subsequent iterations.
Yarowsky and Wicentowski show that the algorithm is extremely accurate in identifying English root + past tense form pairs, including those pairs that are related by non-affixal patterns (e.g., think/thought.)
The main issue with this model is, of course, that it cannot be applied to a new target language without having some a priori knowledge about some of its linguistic properties. Thus, the algorithm cannot be applied in cases in which the grammar of the target language has not been properly described yet, or when the relevant information is not available for other reasons. Moreover, even when such information is in principle available, trying to determine to what extent morphology could be learned without relying on any other knowledge source remains an interesting theoretical pursuit, and one whose answer could shed some light on the problem of human language acquisition.
The current approach: Morphological relatedness as a function of orthographic and semantic similarity
The basic intuition behind the model presented here is extremely simple: Morphologically related words tend to be both orthographically and semantically similar. Obviously, there are many words that are orthographically similar, but are not morphologically related; for example, blue and glue. At the same time, many semantically related words are not morphologically related (for example, blue and green). However, if two words have a similar shape and a related meaning (e.g., green and greenish), they are very likely to be also morphologically related.
In order to make this idea concrete, we use minimum edit distance to identify words that are orthographically similar, and mutual information between words to identify semantically related words.
Outline of the procedure
Given an unannotated input corpus, the algorithm (after some elementary tokenization) extracts a list of candidate content words. This is simply a list of all the alphabetic space- or punctuation-delimited strings in the corpus that have a corpus frequency below .01% of the total token count. 2 Preliminary experiments indicated that our procedure does not perform as well without this trimming. Notice in any case that function words tend to be of little morphological interest, as they display highly lexicalized, often suppletive morphological patterns.
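A minimal sketch of this candidate-extraction step, under the assumptions that tokens are the purely alphabetic strings matched by a simple regular expression and that frequencies are counted over the raw token stream (the function name and pattern are illustrative, not the paper's implementation):

import re
from collections import Counter

def candidate_content_words(corpus_text):
    # Keep purely alphabetic, space- or punctuation-delimited strings.
    tokens = re.findall(r"[^\W\d_]+", corpus_text)
    counts = Counter(tokens)
    # Trim words whose corpus frequency is .01% of the total token count or more.
    threshold = 0.0001 * len(tokens)
    return sorted(w for w, c in counts.items() if c < threshold)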
The word list extracted as described above and the input corpus are used to compute two lists of word pairs: An orthographic similarity list, in which the pairs are scored on the basis of their minimum edit distance, and a semantic similarity list, based on mutual information. Because of minimum thresholds that are enforced during the computation of the two measures, neither list contains all the pairs that can in principle be constructed from the input list.
Before computing the combined score, we get rid of the pairs that do not occur in both lists (the rationale being that we do not want to guess the morphological status of a pair on the sole basis of orthographic or semantic evidence).
We then compute a weighted sum of the orthographic and semantic similarity scores of each remaining pair. In the experiments reported below, the weights are chosen so that the maximum weighted scores for the two measures are in the same order of magnitude (we prefer to align maxima rather than means because both lists are trimmed at the bottom, making means and other measures of central tendency less meaningful).
The pairs are finally ranked on the basis of the resulting combined scores.
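The combination and ranking step can be sketched as follows; scaling the semantic scores so that the two maxima coincide is one simple way of aligning the maxima and is our assumption, as are all names (each key is an unordered word pair, e.g. a frozenset):

def combine_and_rank(orth_scores, sem_scores):
    # Keep only pairs that occur in both the orthographic and the semantic list.
    shared = orth_scores.keys() & sem_scores.keys()
    # Weight the two measures so that their maximum scores are aligned
    # (assumes at least one shared pair).
    w_sem = max(orth_scores[p] for p in shared) / max(sem_scores[p] for p in shared)
    combined = {p: orth_scores[p] + w_sem * sem_scores[p] for p in shared}
    # Rank pairs by the combined score, best first.
    return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)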
In the next subsections, we describe how the orthographic and semantic similarity lists are constructed, and some properties of the measures we adopted.
Scoring the orthographic similarity of word pairs
Like Yarowsky and Wicentowski, we use minimum edit distance to measure orthographic similarity. The minimum edit distance between two strings is the minimum number of editing operations (insertion, deletion, substitution) needed to transform one string into the other (see section 5.6 of Jurafsky and Martin (2000) and the references quoted there). Unlike Yarowsky and Wicentowski, we do not attempt to define a phonologically sensible edit distance scoring function, as this would require making assumptions about how the phonology of the target language maps onto its orthography, thus falling outside the domain of knowledge-free induction. Instead, we assign a cost of 1 to all editing operations, independently of the nature of the source and target segments. Thus, in our system, the pairs dog/Dog, man/men, bat/mat and day/dry are all assigned a minimum edit distance of 1. 3 Rather than computing absolute minimum edit distance, we normalize this measure by dividing it by the length of the longest string (this corresponds to the intuition that, say, two substitutions are less significant if we are comparing two eightletter words than if we are comparing two threeletter words). Moreover, since we want to rank pairs on the basis of orthographic similarity, rather than dissimilarity, we compute (1 -normalized minimum edit distance), obtaining a measure that ranges from 1 for identical forms to 0 for forms that do not share any character.
This measure is computed for all pairs of words in the potential content word list. However, for reasons of size, only pairs that have a score of .5 or higher (i.e., where the two members share at least half of their characters) are recorded in the output list.
Notice that orthographic similarity does not favor concatenative affixal morphology over other types of morphological processes. For example, the pairs woman/women and park/parks both have an orthographic similarity score of .8.
Moreover, orthographic similarity depends only on the two words being compared, and not on global distributional properties of these words and their substrings. Thus, words related by a rare morphological pattern can have the same score as words related by a very frequent pattern, as long as the minimum edit distance is the same. For example, both nucleus/nuclei and bench/benches have an orthographic similarity score of .714, despite the fact that the latter pair reflects a much more common pluralization pattern.
Of course, this emancipation from edge-anchored concatenation and global distributional salience also implies that orthographic similarity will assign high scores to many pairs that are not morphologically related -for example, the pair friends/trends also has an orthographic similarity score of .714.
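A minimal sketch of the orthographic similarity measure, using unit-cost edit operations and the normalization described above; it reproduces the scores quoted in the text (e.g. woman/women = 0.8, nucleus/nuclei ≈ 0.714):

def edit_distance(a, b):
    # Unit-cost minimum edit distance (insertion, deletion, substitution).
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def orthographic_similarity(a, b):
    # 1 - (minimum edit distance / length of the longest string).
    return 1 - edit_distance(a, b) / max(len(a), len(b))

Only pairs scoring .5 or higher would be recorded in the orthographic list, as described above.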
Furthermore, since in most languages the range of possible word lengths is narrow, orthographic similarity as a ranking measure tends to suffer from a "massive tying" problem. For example, when pairs from the German corpus described below are ranked on the sole basis of orthographic similarity, the resulting list is headed by a block of 19,597 pairs that all have the same score. These are all pairs where one word has 9 characters, the other 9 or 8 characters, and the two differ in only one character. (Most of the pairs in this block, 78%, are actually morphologically related. However, given that all pairs contain words of length 9 and 8/9 that differ in one character only, they are bound to reflect only a very small subset of the morphological processes present in German.) For the above reasons, it is crucial that orthographic similarity is combined with an independent measure that allows us to distinguish between similarity due to morphological relatedness vs. similarity due to chance or other reasons.
Scoring the semantic similarity of word pairs
Measuring the semantic similarity of words on the basis of raw corpus data is obviously a much harder task than measuring the orthographic similarity of words.
Mutual information (first introduced to computational linguistics by Church and Hanks (1989)) is one of many measures that seems to be roughly correlated to the degree of semantic relatedness between words. The mutual information between two words A and B is given by:
I(A, B) = \log \frac{\Pr(A, B)}{\Pr(A)\,\Pr(B)} \qquad (1)
Intuitively, the larger the deviation between the empirical frequency of co-occurrence of two words and the expected frequency of co-occurrence if they were independent, the more likely it is that the occurrence of one of the two words is not independent from the occurrence of the other. Brown et alii (1990) observed that when mutual information is computed in a bi-directional fashion, and by counting co-occurrences of words within a relatively large window, but excluding "close" co-occurrences (which would tend to capture collocations and lexicalized phrases), the measure identifies semantically related pairs.
It is particularly interesting for our purposes that most of the examples of English word clusters constructed on the basis of this interpretation of mutual information by Brown and colleagues (reported in their table 6) include morphologically related words. A similar pattern emerges among the examples of German words clustered in a similar manner by Baroni et alii (2002). Rosenfeld (1996) reports that morphologically related pairs are common among words with a high (average) mutual information.
We computed mutual information by considering, for each pair, only co-occurrences within a maximal window of 500 words and outside a minimal window of 3 words. Given that mutual information is notoriously unreliable at low frequencies (see, for example, Manning and Schütze (1999), section 5.4), we only collected mutual information scores for pairs that co-occurred at least three times (within the relevant window) in the input corpus. Obviously, occurrences across article boundaries were not counted. Notice however that the version of the Brown corpus we used does not mark article boundaries. Thus, in this case the whole corpus was treated as a single article.
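A rough sketch of the windowed co-occurrence counting and of the mutual information score in Equation (1); the relative-frequency probability estimates and the exact window-boundary convention are simplifying assumptions on our part, and the function and variable names are ours:

import math
from collections import Counter

def semantic_similarity(articles, min_dist=3, max_dist=500, min_cooc=3):
    # articles: list of token lists; co-occurrences are never counted
    # across article boundaries.
    word_freq, pair_freq, total = Counter(), Counter(), 0
    for tokens in articles:
        total += len(tokens)
        word_freq.update(tokens)
        for i, w in enumerate(tokens):
            # Count co-occurrences outside a 3-word and within a 500-word window.
            for j in range(i + min_dist, min(i + max_dist + 1, len(tokens))):
                pair_freq[frozenset((w, tokens[j]))] += 1
    scores = {}
    for pair, c in pair_freq.items():
        if c < min_cooc or len(pair) < 2:
            continue
        a, b = pair
        # Crude relative-frequency estimates of Pr(A, B), Pr(A) and Pr(B).
        p_ab, p_a, p_b = c / total, word_freq[a] / total, word_freq[b] / total
        scores[pair] = math.log(p_ab / (p_a * p_b))
    return scores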
Our "semantic" similarity measure is based on the notion that related words will often tend to occur near each other. This differs from the (more general) approach of Schone and Jurafsky (2000), who look for words that tend to occur in the same context. It remains an open question whether the two approaches produce complementary or redundant results. 5

Taken by itself, mutual information is a worse predictor of morphological relatedness than minimum edit distance. For example, among the top one hundred pairs ranked by mutual information in each language, only one German pair and five English pairs are morphologically motivated. This poor performance is not too surprising, given that there are plenty of words that often co-occur together without being morphologically related. Consider for example (from our English list) the pairs index/operand and orthodontist/teeth.
Empirical evaluation
Materials
We tested our procedure on the German APA corpus, a corpus of newswire containing over twenty-eight million word tokens, and on the English Brown corpus (Kučera and Francis, 1967), a balanced corpus containing less than one million two hundred thousand word tokens. Of course, the most important difference between these two corpora is that they represent different languages. However, observe also that they have very different sizes, and that they are different in terms of the types of texts constituting them.
Besides the high frequency trimming procedure described above, for both languages we removed from the potential content word lists those words that were not recognized by the XEROX morphological analyzer for the relevant language. The reason for this is that, as we describe below, we use this tool to build the reference sets for evaluation purposes. Thus, morphologically related pairs composed of words not recognized by the analyzer would unfairly lower the precision of our algorithm.
Moreover, after some preliminary experimentation, we also decided to remove words longer than 9 characters from the German list (this corresponds to trimming words whose length is one standard deviation or more above the average token length). This actually lowers the performance of our system, but makes the results easier to analyze -otherwise, the top of the German list would be cluttered by a high number of rather uninteresting morphological pairs formed by inflected forms from the paradigm of very long nominal compounds (such as Wirtschaftsforschungsinstitut 'institute for economic research').
Unlike high frequency trimming, the two operations we just described are meant to facilitate empirical evaluation, and they do not constitute necessary steps of the core algorithm.
Precision
In order to evaluate the precision obtained by our procedure, we constructed a list of all the pairs that, according to the analysis provided by the XEROX analyzer for the relevant language, are morphologically related (i.e., share one of their stems). 6 We refer to the lists constructed in the way we just described as reference sets.
The XEROX tools we used do not provide derivational analysis for English, and provide only a limited form of derivational analysis for German. Our algorithm, however, finds both inflectionally and derivationally related pairs. Thus, basing our evaluation on a comparison with the XEROX parses leads to an underestimation of the precision of the algorithm. We found that this problem is particularly evident in English, since English, unlike German, has a rather poor inflectional morphology, and thus the discrepancies between our output and the analyzer parses in terms of derivational morphology have a more visible impact on the results of the comparison. For example, the English analyzer does not treat pairs related by the adverbial suffix -ly or by the prefix un- as morphologically related, whereas our algorithm found pairs such as soft/softly and load/unload.
In order to obtain a fairer assessment of the algorithm, we went manually through the first 2,000 English pairs found by our algorithm but not parsed as related by the analyzer, looking for items to be added to the reference set. We were extremely conservative, and we added to the reference set only those pairs that are related by a transparent and synchronically productive morphological pattern. When in doubt, we did not correct the analyzer-based analysis. Thus, for example, we did not count pairs such as machine/machinery, variables/varies or electric/electronic as related.
We did not perform any manual post-processing on the German reference set.
Tables 1 and 2 report percentage precision (i.e., the percentage of pairs that are in the reference set over the total number of ranked pairs up to the relevant threshold) at various cutoff points, for German and English respectively. For both languages we notice a remarkably high precision rate (> 90%) up to the 1500-pair cutoff point.
After that, there is a sharper drop in the English precision, whereas the decline in German is more gradual. This is perhaps due in part to the problems with the English reference set we discussed above, but notice also that English has an overall poorer morphological system and that the English corpus is considerably smaller than the German one. Indeed, our reference set for German contains more than ten times the forms in the English reference set.
Notice anyway that, for both languages, the precision rate is still around 50% at the 5000-pair cutoff.

Yarowsky and Wicentowski (2000) report an accuracy of over 99% for their best model and a test set of 3888 pairs. Our precision rate at a comparable cutoff point is much lower (58% at the 4000-pair cutoff). However, Yarowsky and Wicentowski restricted the possible matchings to pairs in which one member is an inflected verb form, and the other member is a potential verbal root, whereas in our experiments any word in the corpus (as long as it was below a certain frequency threshold, and it was recognized by the XEROX analyzer) could be matched with any other word in the corpus. Thus, on the one hand, Yarowsky and Wicentowski forced the algorithm to produce a matching for a certain set of words (their set of inflected forms), whereas our algorithm was not subject to an analogous constraint. On the other hand, though, our algorithm had to explore a much larger possible matching space, and it could (and did) make a high number of mistakes on pairs (such as, e.g., sorry and worry) that Yarowsky and Wicentowski's algorithm did not have to consider. Schone and Jurafsky (2000) report a maximum precision of 92%. It is hard to compare this with our results, since they use a more sophisticated scoring method (based on paradigms rather than pairs) and a different type of gold standard. Moreover, they do not specify what was the size of the input they used for evaluation.
Of course, what counts as a "good" precision rate depends on what we want to do with the output of our procedure. We show below that even a very naive morphological rule extraction algorithm can extract sensible rules by taking whole output lists as its input, since, although the number of false positives is high, they are mostly related by patterns that are not attested as frequently in the list as the patterns relating true morphological pairs. In other words, true morphological pairs tend to be related by patterns that are distributionally more robust than those displayed by false positives. Thus, rule extractors and other procedures processing the output of our algorithm can probably tolerate a high false positive rate if they take frequency and other distributional properties of patterns into account.
Notice that we discussed only precision, and not recall. This is because we believe that the goal of a morphological discovery procedure is not to find the exhaustive list of all morphologically related forms in a language (indeed, because of morphological productivity, such list is infinite), but rather to discover all the possible (synchronically active and/or common) morphological processes present in a language. It is much harder to measure how good our algorithm performed in this respect, but the qualitative analysis we present in the next subsection indicates that, at least, the algorithm discovers a varied and interesting set of morphological processes.
Morphological patterns discovered by the algorithm
The precision tables confirm that the algorithm found a good number of morphologically related pairs. However, if it turned out that all of these pairs were examples of the same morphological pattern (say, nominal plural formation in -s), the algorithm would not be of much use. Moreover, we stated at the beginning that, since our algorithm does not assume an edge-based stem+affix concatenation model of morphology, it should be well suited to discover relations that cannot be characterized in these terms (e.g., pairs related by circumfixation, stem changes, etc.). It is interesting to check whether the algorithm was indeed able to find relations of this sort. Thus, we performed a qualitative analysis of the output of the algorithm, trying to understand what kind of morphological processes were captured by it.
In order to look for morphological processes in the algorithm output, we wrote a program that extracts "correspondence rules" in the following simple way: For each pair, the program looks for the longest shared (case-insensitive) left- and right-edge substrings (i.e., for a stem + suffix parse and for a prefix + stem parse). The program then chooses the parse with the longest stem (assuming that one of the two parses has a non-zero stem), and extracts the relevant edge-bound correspondence rule. If there is a tie, the stem + suffix parse is preferred. The program then ranks the correspondence rules on the basis of their frequency of occurrence in the original output list (ranking by cumulative score yields analogous results). We want to stress that we are adopting this procedure as a method to explore the results, and we are by no means proposing it as a serious rule induction algorithm. One of the most obvious drawbacks of the current rule extraction procedure is that it is only able to extract linear, concatenative, edge-bound suffixation and prefixation patterns, and thus it misses or fails to correctly generalize some of the most interesting patterns in the output. Indeed, looking at the patterns missed by the algorithm (as we do in part below) is as instructive as looking at the rules it found.
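A minimal sketch of this correspondence-rule extraction; names are ours, and the treatment of overlapping prefix/suffix parses is simplified:

from collections import Counter

def shared_prefix_len(a, b):
    n = 0
    while n < min(len(a), len(b)) and a[n] == b[n]:
        n += 1
    return n

def extract_rules(pairs):
    rules = Counter()
    for w1, w2 in pairs:
        a, b = w1.lower(), w2.lower()                      # case-insensitive comparison
        stem_left = shared_prefix_len(a, b)                # stem of the stem + suffix parse
        stem_right = shared_prefix_len(a[::-1], b[::-1])   # stem of the prefix + stem parse
        if max(stem_left, stem_right) == 0:
            continue                                       # no edge-bound rule can be posited
        if stem_left >= stem_right:                        # ties go to the stem + suffix parse
            rules[(a[stem_left:], b[stem_left:], "suffix")] += 1
        else:
            rules[(a[:len(a) - stem_right], b[:len(b) - stem_right], "prefix")] += 1
    # Rank correspondence rules by their frequency in the input pair list.
    return rules.most_common()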
Tables 3 and 4 report the top five suffixation and prefixation patterns found by the rule extractor by taking the entire German and English output lists as its input.
These tables show that our morphological pair scoring procedure found many instances of various common morphological patterns. With the exception of the German "prefixation" rule ers↔drit (actually relating the roots of the ordinals 'first' and 'second'), and of the compounding pattern ↔Öl ('Oil'), all the rules in these lists correspond to realistic affixation patterns. Not surprisingly, in both languages many of the most frequent rules (such as, e.g., ↔s) are poly-functional, corresponding to a number of different morphological relations within and across categories.

The results reported in these tables confirm that the algorithm is capturing common affixation processes, but they are based on patterns that are so frequent that even a very naive procedure could uncover them (for example, as shown by a reviewer, a procedure that pairs words that share the same first five letters and extracts the diverging substrings following the common prefix from each pair). More interesting observations emerge from further inspection of the ranked rule files. For example, among the 70 most frequent German suffixation rules extracted by the procedure, we encounter those in table 5. (In order to find this set of rules using the naive procedure just mentioned, we would have to consider the 2672 most frequent rules; most of these 2672 rules, of course, do not correspond to true morphological patterns, and thus the interesting rules would be buried in noise.)

rule       example                  fq
ag↔äge     Anschlag↔Anschläge       10
ang↔änge   Rückgang↔Rückgänge        6
all↔älle   Überfall↔Überfälle        6
ug↔üge     Tiefflug↔Tiefflüge        5
and↔ände   Vorstand↔Vorstände        5
uch↔üche   Einbruch↔Einbrüche        3
auf↔äufe   Verkauf↔Verkäufe          3
ag↔ägen    Vertrag↔Verträgen         3

Table 5: Some German rules involving stem vowel changes found by the rule extractor

The patterns in this table show that our algorithm is capturing the non-concatenative plural formation process involving fronting of the stem vowel plus addition of a suffix (-e/-en). A smarter rule extractor should be able to generalize from patterns like these to a smaller number of more general rules capturing the discontinuous change. Other umlaut-based patterns that do not involve concomitant suffixation, such as in Mutter/Mütter, were also found by our core algorithm, but they were wrongly parsed as involving prefixes (e.g., Mu↔Mü) by the rule extractor. Finally, it is very interesting to look at those pairs that are morphologically related according to the XEROX analyzer, and that were discovered by our algorithm, but where the rule extractor could not posit a rule, since they do not share a substring at either edge. These are listed, for German, in table 6. We notice in this table, besides three further instances of non-affixal morphology, a majority of pairs involving circumfixation of one of the members.
While a more in-depth qualitative analysis of our results should be conducted, the examples we discussed here confirm that our algorithm is able to capture a number of different morphological patterns, including some that do not fit into a strictly concatenative edge-bound stem+affix model.
Conclusion and Future Directions
We presented an algorithm that, by taking a raw corpus as its input, produces a ranked list of morphologically related pairs at its output. The algorithm finds morphologically related pairs by looking at the degree of orthographic similarity (measured by minimum edit distance) and semantic similarity (measured by mutual information) between words from the input corpus.
Experiments with German and English inputs gave encouraging results, both in terms of precision, and in terms of the nature of the morphological patterns found within the output set.
In work in progress, we are exploring various possible improvements to our basic algorithm, including iterative re-estimation of edit costs, addition of a context-similarity-based measure, and extension of the output set by morphological transitivity, i.e. the idea that if word a is related to word b, and word b is related to word c, then word a and word c should also form a morphological pair.
Moreover, we plan to explore ways to relax the requirement that all pairs must have a certain degree of semantic similarity to be treated as morphologically related (there is evidence that humans treat certain kinds of semantically opaque forms as morphologically complex -see Baroni (2000) and the references quoted there). This will probably involve taking distributional properties of word substrings into account.
From the point of view of the evaluation of the algorithm, we should design an assessment scheme that would make our experimental results more directly comparable to those of Yarowsky and Wicentowski (2000), Schone and Jurafsky (2000) and others. Moreover, a more in-depth qualitative analysis of the results should concentrate on identifying specific classes of morphological processes that our algorithm can or cannot identify correctly.
We envisage a number of possible uses for the ranked list that constitutes the output of our model. First, the model could provide the input for a more sophisticated rule extractor, along the lines of those proposed by Albright and Hayes (1999) and Neuvel (2002). Such models extract morphological generalizations in terms of correspondence patterns between whole words, rather than in terms of affixation rules, and are thus well suited to identify patterns involving non-concatenative morphology and/or morphophonological changes. A list of related words constitutes a more suitable input for them than a list of words segmented into morphemes.
Rules extracted in this way would have a number of practical uses -for example, they could be used to construct stemmers for information retrieval applications, or they could be integrated into morphological analyzers.
Our procedure could also be used to replace the first step of algorithms, such as those of Goldsmith (2001) and Snover and Brent (2001), where heuristic methods are employed to generate morphological hypotheses, and then an information-theoretically/probabilistically motivated measure is used to evaluate or improve such hypotheses. More generally, our algorithm can help reduce the size of the search space that all morphological discovery procedures must explore.
Last but not least, the ranked output of (an improved version of) our algorithm can be of use to the linguist analyzing the morphology of a language, who can treat it as a way to pre-process her/his data, while still relying on her/his analytical skills to extract the relevant morphological generalizations from the ranked pairs.
Table 2: English precision at various cutoff points (8902 = total number of pairs)

Table 3: The most common German suffixation and prefixation patterns

rule    example                         fq
↔s      allotment↔allotments            860
↔ed     accomplish↔accomplished         98
ed↔ing  established↔establishing        87
↔ing    experiment↔experimenting        85
↔d      conjugate↔conjugated            58
↔un     structured↔unstructured         17
↔re     organization↔reorganization     12
↔in     organic↔inorganic               7
↔non    specifically↔nonspecifically    6
↔dis    satisfied↔dissatisfied          5

Table 4: The most common English suffixation and prefixation patterns

Alter/älteren        fordern/gefordert
Arzt/Ärzte           forderten/gefordert
Arztes/Ärzte         fördern/gefördert
Fesseln/gefesselt    genannt/nannte
Folter/gefoltert     genannten/nannte
Putsch/geputscht     geprallt/prallte
Spende/gespendet     gesetzt/setzte
Spenden/gespendet    gestürzt/stürzte
Streik/gestreikt

Table 6: Morphologically related German pairs that do not share an edge found by the basic algorithm
Given phonetically transcribed input, our model would compute phonetic similarity instead of orthographic similarity.
In future versions of the algorithm, we plan to make this high frequency threshold dependent on the size of the input corpus.
Following a suggestion by two reviewers, we are currently experimenting with an iterative version of our algorithm, along the lines of the one described by Yarowsky and Wicentowski. We start with the cost matrix described in the text, but we re-estimate the editing costs on the basis of the empirical character-to-character (or character-to-zero/zero-to-character) probabilities observed in the output of the previous run of the algorithm. Surprisingly, the revised version of the algorithm leads to (moderately) worse results than the single-run version described in this paper. Further experimentation with edit cost re-estimation is needed, in order to understand which aspects of our iterative procedure make it worse than the single-run model, and how it could be improved.
We are currently experimenting with a measure based on semantic context similarity (determined on the basis of class-based left-to-right and right-to-left bigrams), but the current implementation of this requires ad hoc corpus-specific settings to produce interesting results with both our test corpora.
The XEROX morphological analyzers are state-of-the-art, knowledge-driven morphological analysis tools (see for example Karttunen et alii (1997)).
Acknowledgements

We would like to thank Adam Albright, Bruce Hayes and the anonymous reviewers for helpful comments, and the Austria Presse Agentur for kindly making the APA corpus available to us. This work was supported by the European Union in the framework of the IST programme, project FASTY (IST-2000-25420). Financial support for ÖFAI is provided by the Austrian Federal Ministry of Education, Science and Culture.
References

A. Albright and B. Hayes. 1999. An automated learner for phonology and morphology. UCLA manuscript.

M. Baroni. 2000. Distributional cues in morpheme discovery: A computational model and empirical evidence. Ph.D. dissertation, UCLA.

M. Baroni, J. Matiasek and H. Trost. 2002. Wordform- and class-based prediction of the components of German nominal compounds in an AAC system. To appear in Proceedings of COLING 2002.

P. Brown, P. Della Pietra, P. DeSouza, J. Lai, and R. Mercer. 1990. Class-based n-gram models of natural language. Computational Linguistics, 18:467-479.

K. Church and P. Hanks. 1989. Word association norms, mutual information, and lexicography. Proceedings of ACL 27, 76-83.

J. Goldsmith. 2001. Unsupervised learning of the morphology of a natural language. Computational Linguistics, 27:153-198.

C. Jacquemin. 1997. Guessing morphology from terms and corpora. Proceedings of SIGIR 97, 156-265.

D. Jurafsky and J. Martin. 2000. Speech and Language Processing. Prentice-Hall, Upper Saddle River, NJ.

L. Karttunen, K. Gaál, and A. Kempe. 1997. Xerox Finite-State Tool. Xerox Research Centre Europe, Grenoble.

H. Kučera and N. Francis. 1967. Computational analysis of present-day American English. Brown University Press, Providence, RI.

C. Manning and H. Schütze. 1999. Foundations of statistical natural language processing. MIT Press, Cambridge, MASS.

S. Neuvel. 2002. Whole word morphologizer. Expanding the word-based lexicon: A non-stochastic computational approach. Brain and Language, in press.

R. Rosenfeld. 1996. A maximum entropy approach to adaptive statistical language modeling. Computer Speech and Language, 10:187-228.

P. Schone and D. Jurafsky. 2000. Knowledge-free induction of morphology using latent semantic analysis. Proceedings of the Conference on Computational Natural Language Learning.

M. Snover and M. Brent. 2001. A Bayesian model for morpheme and paradigm identification. Proceedings of ACL 39, 482-490.

D. Yarowsky and R. Wicentowski. 2000. Minimally supervised morphological analysis by multimodal alignment. Proceedings of ACL 38, 207-216.
243,865,589 | MLEC-QA: A Chinese Multi-Choice Biomedical Question Answering Dataset | Question Answering (QA) has been successfully applied in scenarios of human-computer interaction such as chatbots and search engines. However, for the specific biomedical domain, QA systems are still immature due to expert-annotated datasets being limited by category and scale. In this paper, we present MLEC-QA, the largest-scale Chinese multi-choice biomedical QA dataset, collected from the National Medical Licensing Examination in China. The dataset is composed of five subsets with 136,236 biomedical multi-choice questions with extra materials (images or tables) annotated by human experts, and first covers the following biomedical sub-fields: Clinic, Stomatology, Public Health, Traditional Chinese Medicine, and Traditional Chinese Medicine Combined with Western Medicine. We implement eight representative control methods and open-domain QA methods as baselines. Experimental results demonstrate that even the current best model can only achieve accuracies between 40% to 55% on five subsets, especially performing poorly on questions that require sophisticated reasoning ability. We hope the release of the MLEC-QA dataset can serve as a valuable resource for research and evaluation in open-domain QA, and also make advances for biomedical QA systems. 1 | [
184487171,
189060,
52967399,
3618568,
226262377,
52158121,
202572622,
199379474,
13403541
] | MLEC-QA: A Chinese Multi-Choice Biomedical Question Answering Dataset
Association for Computational Linguistics. Copyright Association for Computational Linguistics. November 7-11, 2021.
Jing Li
College of Computer and Data Science
Fuzhou University
FuzhouChina
Shangping Zhong spzhong@fzu.edu.cn
College of Computer and Data Science
Fuzhou University
FuzhouChina
Kaizhi Chen
College of Computer and Data Science
Fuzhou University
FuzhouChina
MLEC-QA: A Chinese Multi-Choice Biomedical Question Answering Dataset
Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing
the 2021 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics. November 7-11, 2021. Page 8862.
Question Answering (QA) has been successfully applied in scenarios of human-computer interaction such as chatbots and search engines. However, for the specific biomedical domain, QA systems are still immature due to expert-annotated datasets being limited by category and scale. In this paper, we present MLEC-QA, the largest-scale Chinese multi-choice biomedical QA dataset, collected from the National Medical Licensing Examination in China. The dataset is composed of five subsets with 136,236 biomedical multi-choice questions with extra materials (images or tables) annotated by human experts, and first covers the following biomedical sub-fields: Clinic, Stomatology, Public Health, Traditional Chinese Medicine, and Traditional Chinese Medicine Combined with Western Medicine. We implement eight representative control methods and open-domain QA methods as baselines. Experimental results demonstrate that even the current best model can only achieve accuracies between 40% to 55% on five subsets, especially performing poorly on questions that require sophisticated reasoning ability. We hope the release of the MLEC-QA dataset can serve as a valuable resource for research and evaluation in open-domain QA, and also make advances for biomedical QA systems. 1
Introduction
As a branch of the QA task, Biomedical Question Answering (BQA) enables effectively perceiving, accessing, and understanding complex biomedical knowledge through innovative applications, which makes BQA an important QA application in the biomedical domain (Jin et al., 2021). Such a task has recently attracted considerable attention from the NLP community (Zweigenbaum, 2003; He et al., 2020b; Jin et al., 2020), but is still confronted with the following three key challenges (the MLEC-QA dataset is available at https://github.com/Judenpech/MLEC-QA): (1) Most work attempts to build BQA systems with deep learning and neural network techniques (Ben Abacha et al., 2017, 2019b; Pampari et al., 2018), and such systems are thus data-hungry. However, annotating large-scale biomedical question-answer pairs with high quality is prohibitively expensive. As a result, current expert-annotated BQA datasets are small in size.
(2) Multi-choice QA is a typical format of BQA dataset. Most previous work focuses on such datasets, whose contents are in the fields of clinical medicine (Zhang et al., 2018b; Jin et al., 2020) and consumer health (Zhang et al., 2017, 2018a; He et al., 2019; Tian et al., 2019). However, there are many other specialized sub-fields in biomedicine that have not been studied before (e.g., Stomatology).
(3) Ideal BQA systems should not only focus on raw text data, but also fully utilize various types of biomedical resources, such as images and tables. Unfortunately, most BQA datasets consist of either texts (Tsatsaronis et al., 2015; Pampari et al., 2018; Jin et al., 2019) or images (Lau et al., 2018; Ben Abacha et al., 2019a; He et al., 2020a); as a result, BQA datasets built by fusing different biomedical resources are relatively limited.
To push forward the variety of BQA datasets, we present MLEC-QA, the largest-scale Chinese multi-choice BQA dataset. Questions in MLEC-QA are collected from the National Medical Licensing Examination in China (NMLEC) 2 , which are carefully designed by human experts to evaluate professional knowledge and skills for those who want to be medical practitioners in China. The NMLEC has a total number of 24 categories of exams, but only five of them have the written exams in Chinese. Every year, only around 18-22% of applicants can pass one of these exams, showing the complexity and difficulty of passing them even for skilled humans.
There are three main properties of MLEC-QA: (1) MLEC-QA is the largest-scale Chinese multi-choice BQA dataset, containing 136,236 questions with extra materials (images or tables); Table 1 shows an example. (2) MLEC-QA first covers the following biomedical sub-fields: Clinic, Stomatology, Public Health, Traditional Chinese Medicine, and Traditional Chinese Medicine Combined with Western Medicine (denoted as Chinese Western Medicine). Only one of them (Clinic) has been studied in previous research. (3) MLEC-QA provides extra labels of five question types (A1, A2, A3/A4 and B1) for each question, and an in-depth analysis of the most frequent reasoning types of the questions in MLEC-QA, such as lexical matching, multi-sentence reading and concept summary. Detailed analysis can be found in Section 3.2. Examples of sub-fields and question types are summarized in Table 2. Due to page limits, we set each example of the five question types to correspond to one of the sub-fields.

As an attempt to solve MLEC-QA and provide strong baselines, we implement eight representative control methods and open-domain QA methods in a two-stage retriever-reader framework: (1) A retriever finds documents that (might) contain an answer from a large collection of documents. We adopt Chinese Wikipedia dumps (https://dumps.wikimedia.org/) as our information sources, and use a distributed search and analytics engine, ElasticSearch, as the document store and document retriever. (2) A reader finds the answer in the documents retrieved by the retriever. We fine-tune five pre-trained language models for machine reading comprehension as the reader (see the sketch after the contribution list below). Experimental results show that even the current best model can only achieve accuracies of 53%, 44%, 40%, 55%, and 50% on the five categories of subsets: Clinic, Stomatology, Public Health, Traditional Chinese Medicine, and Chinese Western Medicine, respectively. The models especially perform poorly on questions that require understanding comprehensive biomedical concepts and handling complex reasoning. In summary, the major contributions of this paper are threefold:
• We present MLEC-QA, the largest-scale Chinese multi-choice BQA dataset with extra materials, and it first covers five biomedical sub-fields, only one of which has been studied in previous research.
• We conduct an in-depth analysis on MLEC-QA, revealing that both comprehensive biomedical knowledge and sophisticated reasoning ability are required to answer questions. • We implement eight representative methods as baselines and show the performance of existing methods on MLEC-QA, and provide an outlook for future research directions.
Related Work
Open-Domain BQA The Text REtrieval Conference (TREC) (Voorhees and Tice, 2000) triggered open-domain BQA research. At the time, most traditional BQA systems employed complex pipelines with question processing, document/passage retrieval, and answer processing modules. Examples of such systems include EPoCare (Niu et al., 2003), MedQA (Yu et al., 2007; Terol et al., 2007; Wang et al., 2007) and AskHERMES (Cao et al., 2011). With the introduction of various BQA datasets focused on specific biomedical topics, such as BioASQ (Tsatsaronis et al., 2015), emrQA (Pampari et al., 2018) and PubMedQA (Jin et al., 2019), and pioneered by Chen et al. (2017), modern open-domain BQA systems have largely simplified the traditional BQA pipeline to a two-stage retriever-reader framework by combining information retrieval and machine reading comprehension models (Ben Abacha et al., 2017, 2019b). Moreover, the extensive use of medical images (e.g., CT) and tables (e.g., laboratory examination) has improved results in real-world clinical scenarios, making BQA a task lying at the intersection of Computer Vision (CV) and NLP. However, most BQA models focus on either texts or images (Lau et al., 2018; Ben Abacha et al., 2019a; He et al., 2020a); as a result, BQA datasets built by fusing different biomedical resources are relatively limited.
Open-Domain Multi-Choice BQA Datasets
With rapidly increasing numbers of consumers asking health-related questions on online medical consultation websites, cMedQA (Zhang et al., 2017, 2018a), webMedQA (He et al., 2019) and ChiMed (Tian et al., 2019) exploit patient-doctor QA data to build consumer health QA datasets. However, the quality problem with such datasets is that the answers are written by online doctors and the data itself has intrinsic noise. By contrast, medical licensing examinations, which are designed by human medical experts, often take the form of multi-choice questions, and contain a significant number of questions that require comprehensive biomedical knowledge and multiple reasoning abilities. Such exams are a perfect data source to push the development of BQA systems. Several datasets have been released that exploit such naturally existing BQA data, which are summarized in Table 3. Collected from the Spanish public healthcare specialization examination, HEAD-QA (Vilares and Gómez-Rodríguez, 2019) contains multi-choice questions from six biomedical categories, including Medicine, Pharmacology, Psychology, Nursing, Biology and Chemistry. NLPEC (Li et al., 2020) collects 21.7k multi-choice questions with human-annotated answers from the National Licensed Pharmacist Examination in China, but only a small sample of the data is available for public use.
Last but not least, clinical medicine, as one of the 24 categories in NMLEC, has been previously studied by MedQA (Zhang et al., 2018b) and MEDQA (Jin et al., 2020). However, the former did not release any data or code, and the latter only focused on clinical medicine with 34k questions in their cross-lingual studies; questions with images or tables were not included, and none of the remaining categories in MLEC-QA were studied.

MLEC-QA covers five sub-fields: Clinic, Stomatology, Public Health, Traditional Chinese Medicine (TCM), and Chinese Western Medicine (CWM). After removing duplicated or incomplete questions (e.g., some options missing), there are 136,236 questions in MLEC-QA, and each question contains five candidate options with one correct/best option and four incorrect or partially correct options. We describe in detail the JSON data structure of MLEC-QA in Appendix B. MLEC-QA contains 1,286 questions with extra materials that provide additional information needed to answer correctly. As shown in Figure 1, the extra materials are all in a graphical format with various types, such as ECG, table of a patient's condition record, formula, CT, line graph, explanatory drawing, etc. We include these questions with extra materials in MLEC-QA to facilitate future BQA explorations at the crossover of CV and NLP, although we will not exploit them in this work due to the various specifics involved in extra materials. As shown in Table 2 and Table 4, the questions in MLEC-QA are divided into five types:
• A1: single statement question;
• B1: similar to A1, with a group of options shared in multiple questions;
• A2: questions accompanied by a clinical scenario;
• A3: similar to A2, with information shared among multiple independent questions;
• A4: similar to A3, with information shared among multiple questions; new information can be gradually added.

We further classify these questions into Knowledge Questions (KQ) and Case Questions (CQ), where KQ (A1+B1) focus on the definition and comprehension of biomedical knowledge, while CQ (A2+A3/A4) require analysis and practical application for real-world medical scenarios. Both types of questions require multiple reasoning abilities to answer.

For the Train/Dev/Test split, randomly splitting may cause data imbalance because the numbers of the five question types vary considerably (e.g., A1 is far more frequent than the others). To ensure that the subsets have the same distribution of question types, we split the data based on the question types, with 80% training, 10% development, and 10% test. The overall statistics of the MLEC-QA dataset are summarized in Table 5. We can see that the length of the questions and the vocabulary size in Clinic are larger than in the rest of the subsets, suggesting that clinical medicine may involve more medical subjects than other specialties.
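The question-type-stratified 80/10/10 split described above can be reproduced along the following lines; the 'qtype' field name and the seed are assumptions for illustration, not the authors' exact implementation:

import random
from collections import defaultdict

def stratified_split(questions, seed=42):
    # Group questions by their question type (A1, A2, A3/A4, B1).
    by_type = defaultdict(list)
    for q in questions:
        by_type[q["qtype"]].append(q)
    train, dev, test = [], [], []
    rng = random.Random(seed)
    for qs in by_type.values():
        rng.shuffle(qs)
        n_dev = n_test = int(0.1 * len(qs))
        dev.extend(qs[:n_dev])
        test.extend(qs[n_dev:n_dev + n_test])
        train.extend(qs[n_dev + n_test:])
    return train, dev, test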
Reasoning Types of the Questions
The annual examination papers are designed by a team of healthcare experts who try to follow a similar distribution of reasoning types. To better understand our dataset, we manually inspected 10 sets of examination papers (2 sets for each sub-field), and summarize the most frequent reasoning types of the questions from MLEC-QA and previous works (Lai et al., 2017; Zhong et al., 2020). The examples are shown in Table 6. Notably, the "Evidence" is well-organized by us to show how models need to handle these reasoning issues to achieve promising performance on MLEC-QA. The definitions of the reasoning types of the questions are as follows:
Lexical Matching This type of question is common and the simplest. The retrieved documents are highly matched with the question, the correct answer exactly matches a span in the document. As shown in the example, the model only needs to check which option is matched with.
Multi-Sentence Reading Unlike lexical matching, where questions and correct answers can be found within a single sentence, multi-sentence reading requires models to read multiple sentences to gather enough information to generate answers.
Concept Summary
The correct options for this type of question do not appear directly in the documents. It requires the model to understand and summarize the question relevant concepts after reading the documents. As shown in the example, the model needs to understand and summarize the relevant mechanism of "Thermoregulation", and infer that when an obstacle arises in thermoregulation, the body temperature will not be able to maintain a relatively constant level, that is, it will rise with the increase of ambient temperature.
Numerical Calculation This type of question involves logical reasoning and arithmetic operations related to mathematics. As shown in the example, the model first needs to judge the approximate age of month according to the height of the infant, and then reverse calculate the age of months according to the height formula of infants 7~12 months old to obtain the age in months: (68 -65) / 1.5 + 6 = 8.
Multi-Hop Reasoning
This type of question requires several steps of logical reasoning over multiple documents to answer. As shown in the example, the patient's hemoglobin (HB) value is low, indicating that the patient has anemia, and the supply of iron should be increased in their diet. The model needs to compare the iron content of each option: the iron content of C, D and E is low and that of A, B is high, but B is not easily absorbed, so the best answer is A.
Reasoning Type Example ( * represents the correct answer)
Lexical Matching
The main hallmark of peritonitis is: A. Significant abdominal distension B. Abdominal mobility dullness C. Bowel sounds were reduced or absent D. Severe abdominal cramping * E. Peritoneal irritation signs Evidence:
The hallmark signs of peritonitis are peritoneal irritation signs, i.e., tenderness, muscle tension, and rebound tenderness.
Multi-Sentence Reading
Which is wrong in the following narrative relating to the appendix: A. The appendiceal artery is the terminal artery B. Appendiceal tissues contain abundant lymphoid follicles C. Periumbilical pain at appendicitis onset is visceral pain * D. Resection of the appendix in adults will impair the body's immune function E. There are argyrophilic cells in the deep part of the appendiceal mucosa, which are associated with carcinoid tumorigenesis Evidence:
(1) The appendiceal artery is a branch of the ileocolic artery and is a terminal artery without collaterals;
(2) The appendix is a lymphoid organ[...]Therefore, resection of the adult appendix does not compromise the body's immune function; (3) The nerves of the appendix are supplied by sympathetic fibers[...]belonging to visceral pain; (4) Argyrophilic cells are found in the appendiceal mucosa and are the histological basis for the development of appendiceal carcinoids.
Concept Summary
The main hallmark of thermoregulatory disorders in hyperthermic environments is: A. Developed syncope B. Developed shock C. Dry heat of skin * D. Increased body temperature E. Decreased body temperature Evidence:
The purpose of thermoregulation is to maintain body temperature in the normal range. In hyperthermic environments, the thermoregulatory center is dysfunctional and cannot maintain the body's balance of heat production and heat dissipation, so the body temperature is increased by the influence of ambient temperature.
Numerical Calculation
A normal infant, weighing 7.5kg and measuring 68cm in length. Bregma 1.0cm, head circumference 44cm. Teething 4. Can sit alone and can pick up pellets with a hallux and forefinger. The most likely age of the infant is: * A. 8 months B. 24 months C. 18 months D. 12 months E. 5 months Evidence: A normal infant measured 65cm at 6 months and 75cm at 1 year of age. The infant's 7 to 12 month length is calculated as: length = 65 + (months of age -6) x 1.5.
Multi-Hop Reasoning
6-month-old female infant, artificial feeding mainly, physical examination revealed a low hemoglobin (HB) value, the dietary supplement that should be mainly added is: * A. Liver paste B. Egg yolk paste C. Tomato paste D. Rice paste E. Apple puree Evidence:
(1) Low HB value indicates anemia tendency. Iron deficiency anemia is the most important and common type of anemia in China.
(2) Iron supply should be increased in diet.
(3) Liver paste is rich in iron. (4) The iron content of egg yolk paste is lower than that of liver paste, and it is not easy to be absorbed. (5) The iron content of tomato paste, rice paste and apple puree is lower than that of liver paste.
Document Retriever
Both examination counseling books and Wikipedia have been used as sources of supporting materials in previous research (Zhong et al., 2020; Jin et al., 2020; Vilares and Gómez-Rodríguez, 2019). However, because examination counseling books are designed to help examinees pass the examination, the knowledge in them is highly simplified and summarized, and even easily confused knowledge points are explicitly contrasted. Using examination counseling books as information sources may therefore make a retriever-reader more likely to exploit shallow text matching, so that complex reasoning is seldom involved.
Therefore, to make it easier to understand where improvements from future models come from, we adopt Chinese Wikipedia dumps as our information sources, which contain a wealth of real-world factual information (over 1 million articles). Building on the whole Chinese Wikipedia data, we use the distributed search and analytics engine Elasticsearch as the document store and document retriever, which supports very fast full-text search. The similarity scoring function used in Elasticsearch is the BM25 algorithm (Robertson and Zaragoza, 2009), which measures the relevance of documents to a given search query. As defined in Appendix C, the larger the BM25 score, the stronger the relevance between document and query.
Specifically, for each question Q_i and each candidate option O_ij, where j ∈ {A, B, C, D, E}, we define the search query Q_iO_ij = Q_i + O_ij and submit it to Elasticsearch; this is repeated for all five options. The document with the highest BM25 score returned by each query is selected as supporting material for the next-stage machine reading comprehension task.
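A sketch of this per-option retrieval step using the Python Elasticsearch client. The index name zhwiki, the field name text, and the local host address are assumptions (and the exact search call varies slightly across client versions); the query construction Q_i + O_ij and the top-1 selection follow the text:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # assumes a local node indexing Chinese Wikipedia

def retrieve_top_doc(question, option, index="zhwiki", field="text"):
    """Issue the concatenated query Q_i + O_ij and return the highest-BM25 document text."""
    query = question + option
    resp = es.search(index=index,
                     body={"query": {"match": {field: query}}, "size": 1})
    hits = resp["hits"]["hits"]
    return hits[0]["_source"][field] if hits else ""

def retrieve_for_question(question, options):
    # one query per option, as described in the text
    return {label: retrieve_top_doc(question, text) for label, text in options.items()}
```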
Control Methods
In general, each option of a multi-choice question should be correct equally often, but in practice the positions in which the correct options appear are not completely random, and the more options there are, the lower the degree of randomization (Poundstone, 2014). Given the complex nature of multi-choice tasks, we employ three control methods to ensure a fair comparison among the various open-domain QA models.
Random A′ = Random(O). For each question, an option is randomly chosen as the answer from the five candidate options. We run this experiment five times and average the results as the Random baseline.

Constant A′ = Constant_j(O), where j ∈ {A, B, C, D, E}. For each question, the j-th option is always chosen as the answer, which yields the accuracy distribution over the five candidate options.

Mixed A′ = Mixed(O). Incorporating previous experience with the NMLEC and with multi-choice tasks (Vilares and Gómez-Rodríguez, 2019), the Mixed method simulates how humans solve uncertain questions and consists of the following three strategies: (1) the correct rate of options of the form "All of the options above are correct/incorrect" is much higher than that of the other options; (2) supposing the lengths of the options are roughly equal, if only one option is noticeably longer, with a more detailed and specific description, or noticeably shorter than the other options, choose that option; (3) the correct option tends to appear in the middle of the candidate options. The three strategies are applied in turn; if any strategy matches, the matching option is chosen as the answer.
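The three control methods can be made concrete with a small sketch. The Mixed heuristics below only approximate the three strategies described above; the length thresholds and the string patterns used to detect "all of the above"-style options (including the Chinese substring) are illustrative assumptions:

```python
import random

OPTION_LABELS = ["A", "B", "C", "D", "E"]

def random_baseline(question):
    return random.choice(OPTION_LABELS)

def constant_baseline(question, j="C"):
    return j  # always pick the j-th option

def mixed_baseline(question):
    options = question["options"]
    # (1) prefer "all of the options above"-style options (pattern is an assumption)
    for label, text in options.items():
        if "以上" in text or "All of the options above" in text:
            return label
    # (2) prefer an option that is clearly longer or shorter than the rest
    lengths = {label: len(text) for label, text in options.items()}
    mean_len = sum(lengths.values()) / len(lengths)
    outliers = [l for l, n in lengths.items() if n > 2 * mean_len or n < 0.5 * mean_len]
    if len(outliers) == 1:
        return outliers[0]
    # (3) otherwise favour a middle option
    return "C"
```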
Fine-Tuning Pre-Trained Language Models
We apply a unified framework, UER-py (Zhao et al., 2019), to fine-tune pre-trained language models on the machine reading comprehension task as our reader. We consider the following five pre-trained language models: Chinese BERT-Base (denoted as BERT-Base) and Multilingual Uncased BERT-Base (denoted as BERT-Base-Multilingual) (Devlin et al., 2019), Chinese BERT-Base with whole word masking pre-trained over larger corpora (denoted as BERT-wwm-ext) (Cui et al., 2019), and the robustly optimized BERTs: Chinese RoBERTa-wwm-ext and Chinese RoBERTa-wwm-ext-large (Cui et al., 2019). Specifically, given the i-th question Q_i, the retrieved question-relevant documents D_i, and a candidate option O_ij, where j ∈ {A, B, C, D, E}, the input sequence for the framework is constructed by concatenating [CLS], the tokens in D_i, [SEP], the tokens in Q_i, [SEP], the tokens in the option O_ij, and [SEP], where [CLS] is the classifier token and [SEP] is the sentence separator in pre-trained language models. We pass each of the five options in turn, and the model outputs the hidden state representation S_ij ∈ R^{1×H} of the input sequence, then performs classification and outputs an unnormalized log probability P_ij ∈ R of each option O_ij being correct via P_ij = S_ij W^T, where W ∈ R^{1×H} is the weight matrix. Finally, we pass the unnormalized log probabilities of the five options through a softmax layer and take the option with the highest probability as the predicted answer A′_i.
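The option-scoring step can be sketched as follows. This uses the Hugging Face transformers multiple-choice head as a stand-in for the UER-py implementation described above, and it folds the three-part concatenation into BERT's two segments (document vs. question+option); the checkpoint name is an assumption and the classifier head would of course need fine-tuning:

```python
import torch
from transformers import BertTokenizer, BertForMultipleChoice

MODEL_NAME = "hfl/chinese-roberta-wwm-ext"   # assumed checkpoint; any Chinese BERT works
tokenizer = BertTokenizer.from_pretrained(MODEL_NAME)
model = BertForMultipleChoice.from_pretrained(MODEL_NAME)

def predict_answer(document, question, options):
    """Score the five options for one question and return the predicted label."""
    labels = list(options.keys())                      # ["A", "B", "C", "D", "E"]
    first = [document] * len(labels)                   # retrieved evidence D_i
    second = [question + options[l] for l in labels]   # Q_i concatenated with O_ij
    enc = tokenizer(first, second, truncation=True, max_length=512,
                    padding="max_length", return_tensors="pt")
    enc = {k: v.unsqueeze(0) for k, v in enc.items()}  # (1, num_choices, seq_len)
    with torch.no_grad():
        logits = model(**enc).logits                   # unnormalized scores, shape (1, 5)
    probs = torch.softmax(logits, dim=-1)
    return labels[int(probs.argmax())]
```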
Experiments
Experimental Settings
We conduct detailed experiments and analyses to investigate the performance of the control methods and open-domain QA methods on MLEC-QA. As shown in Figure 2, we implement a two-stage retriever-reader framework: (1) a retriever first retrieves question-relevant documents from Chinese Wikipedia using Elasticsearch, and (2) a reader then employs machine reading comprehension models to generate answers from the documents retrieved by the retriever. For the reader, all machine reading comprehension models are trained for 12 epochs with an initial learning rate of 2e-6, a maximum sequence length of 512, and a batch size of 5. These parameters are selected based on the best performance on the development set, and we keep the default values for the other hyper-parameters (Devlin et al., 2019). We use accuracy as the metric to evaluate the different methods, and report the human pass mark (60%) alongside the baseline results instead of human performance, because human performance varies widely, from almost full marks to failing the exam.
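Since accuracy is the only metric, evaluation reduces to a few lines (a sketch; predict_fn stands for any full retrieve-then-read pipeline that maps a question record to an option label, and the 60% pass mark is the threshold mentioned above):

```python
def evaluate(questions, predict_fn, pass_mark=0.60):
    """Return (accuracy, would_pass) for a set of question records."""
    correct = sum(predict_fn(q) == q["answer"] for q in questions)
    accuracy = correct / len(questions)
    return accuracy, accuracy >= pass_mark
```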
Retrieval Performance
The main drawback of the Chinese Wikipedia database for biomedicine is that it is not comprehensive and thorough, that is, it may not provide complete coverage of all subjects. To evaluate whether the retrieved documents cover enough evidence to answer the questions, we sampled 5% (681) of the questions from the development sets of the five categories using stratified random sampling, and had five medical experts manually annotate each question with one of 3 labels: (1) Exactly Match (EM): the retrieved documents exactly match the question.
(2) Partial Match (PM): the retrieved documents partially match the question, can be confused with the correct options or are incomplete.
(3) Mismatch (MM): the retrieved documents do not match the question at all. Table 7 lists the performance of the retrieval strategy as well as the annotation results for KQ and CQ questions on the five subsets. From the table, we make the following observations. First, most retrieved documents only partially match (PM) the questions, while the EM and MM rates reach maximums of 20.83% (CWM) and 50% (PH), respectively. Second, the matching rate of CQ is higher than that of KQ in most subsets, as CQ usually involve simpler concepts and use more words to describe the question, which makes retrieval easier. By contrast, KQ usually involve more complex concepts that may not be covered in the Chinese Wikipedia database; therefore, the mismatch rate of KQ is significantly higher than that of CQ. Third, among the different subsets, retrieval performs best on the Cli subset, as clinical medicine is more "general" and thus easier to retrieve than the other specialties, whereas it performs worst on the PH subset, because Public Health questions often revolve around easily confused concepts, which leads to poor retrieval performance.
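The per-subset matching rates reported in Table 7 can be computed from the expert annotations with a small helper (a sketch; the triple layout and label strings are assumptions about how the annotations are stored):

```python
from collections import Counter

def matching_rates(annotations):
    """annotations: list of (subset, question_class, label) triples, where
    label is one of "EM", "PM", "MM" and question_class is "KQ" or "CQ"."""
    rates = {}
    for subset in {a[0] for a in annotations}:
        for qclass in ("KQ", "CQ"):
            labels = [a[2] for a in annotations if a[0] == subset and a[1] == qclass]
            counts = Counter(labels)
            total = len(labels) or 1
            rates[(subset, qclass)] = {l: 100.0 * counts[l] / total for l in ("EM", "PM", "MM")}
    return rates
```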
Baseline Results
Table 8 and Figure 3 show the performance of the baselines, as well as their performance on KQ and CQ questions. As we can see, among the control methods, the correct option has a slight tendency to appear in the middle (C and D) of the candidate options, but the margins are small. The performance of the Mixed method is slightly better than a random guess, which indicates that guessing skills give humans a modest extra edge, since they can exclude certain obviously wrong options; however, it is impossible to pass the exam through opportunistic guessing alone. RoBERTa-wwm-ext-large and BERT-wwm-ext perform better than the other models on all five subsets. However, even the best-performing model only achieves accuracies between 40% and 55% on the five subsets, so there is still a clear gap to passing the exams. Comparing performance on KQ and CQ questions, most models perform better on CQ, which is positively correlated with CQ's better retrieval performance. Among the different subsets, TCM is the easiest (54.95%) to answer across the board, while PH is the hardest (40.04%), which does not fully correspond to their retrieval performance shown in Table 7. A possible reason is that the diagnosis and treatment of diseases in traditional Chinese medicine are characterized by "Homotherapy for Heteropathy", that is, treating different diseases with the same method, which may produce patterns or mechanisms that the models can exploit to reach such results.
Comparative Analysis
Given that we use the Chinese Wikipedia database as our information source and apply a two-stage retriever-reader framework, the poor baseline performance could stem from either the information sources or the retriever-reader framework.
Information Sources

Both books and Wikipedia have been used as information sources in previous research. One of our subsets, Clinic, has been studied by MEDQA (Jin et al., 2020) as a subset (MCMLE) for cross-lingual research. MEDQA uses 33 medical textbooks as its information sources, and its evaluation shows that the collected text materials provide enough information to answer all the questions in MCMLE. We compare the performance of the best model (RoBERTa-wwm-ext-large) on both datasets, as shown in Table 9. Notably, questions in MCMLE have four candidate options, because one of the wrong options was deleted; therefore, the random accuracy on MCMLE is higher than on ours.
From the results, we can see that even with fully covering materials, the best model achieves only 16.88% higher accuracy on the test set than ours, which indicates that using Wikipedia as the information source is not much worse than using medical books, and that the main cause of the low baseline performance likely lies in the machine reading comprehension models' lack of sophisticated reasoning ability.
Retriever-Reader
We also perform an experiment in which we sampled 5% (92) of the questions from the Public Health development set and had a medical expert manually annotate each question to determine whether it exactly or partially matches the top K retrieved documents, as shown in Table 10. Notably, the actual number of retrieved documents is 5×K, since the query Q_iO_ij = Q_i + O_ij is issued separately for each of the five options.
From the results, we can see that retrieving more documents mainly adds noise, since the best-matching documents are already fetched among the top-1 results. This indicates that the poor performance of the machine reading comprehension models stems from insufficient reasoning ability rather than from the number of retrieved documents.
Conclusion
We present the largest-scale Chinese multi-choice BQA dataset, MLEC-QA, which contains five biomedical subfields with extra materials (images or tables) annotated by human experts: Clinic, Stomatology, Public Health, Traditional Chinese Medicine, and Chinese Western Medicine. The questions come from the examinations (NMLEC) used to assess the qualifications of medical practitioners in the Chinese healthcare system, and they require specialized domain knowledge and multiple reasoning abilities to answer. We implement eight representative control methods and open-domain QA methods within a two-stage retriever-reader framework as baselines. The experimental results demonstrate that even the current best approaches cannot achieve good performance on MLEC-QA. We hope MLEC-QA can help researchers improve open-domain QA models and also advance BQA systems.

F Data Statement

F.1 CURATION RATIONALE

In order to benefit researchers working on improving open-domain QA models, and also to make advances for Biomedical Question Answering (BQA) systems, we present MLEC-QA, the largest-scale Chinese multi-choice BQA dataset to date.
F.2 LANGUAGE VARIETY
The data is represented in simplified Chinese (zh-Hans-CN) and collected from the 2006 to 2020 NMLEC, as well as from practice exercises on the Internet.
F.3 SPEAKER DEMOGRAPHIC
Since the data is designed by a team of anonymous human healthcare experts, we are not able to reach them directly, and they thus could not be asked for demographic information. It is expected that most of the speakers are professionals working in the area of biomedicine, come from China, and speak Chinese as a native language. No direct information is available about age and gender distribution.
F.4 ANNOTATOR DEMOGRAPHIC
The experiments involve annotations from 5 medical experts who hold at least a master's degree and have passed the NMLEC. They ranged in age from 28 to 45 years, included 3 men and 2 women, and all come from China and speak Chinese as a native language.
F.5 SPEECH SITUATION
All questions in MLEC-QA are collected from the National Medical Licensing Examination in China (NMLEC), which is carefully designed by human experts to evaluate the professional knowledge and skills of those who want to become medical practitioners in China.
F.6 TEXT CHARACTERISTICS
The topics included in MLEC-QA are in 5 biomedical sub-fields: Clinic, Stomatology, Public Health, Traditional Chinese Medicine, and Traditional Chinese Medicine Combined with Western Medicine.
Figure 1: Examples of extra materials.

Figure 2: Overview of the two-stage retriever-reader framework on MLEC-QA.

Figure 3: Performance in accuracy (%) on KQ (solid lines) and CQ (dashed lines) questions.
F.7 RECORDING QUALITY

N/A.
[Question] Male, 63 years old. Had headache with vomiting after long-distance running 3 hours ago. Physical examination: not cooperative, somnolence, double pupillary light reflex exists, neck rigidity, free movement of limbs, muscle tension slightly high. Bilateral Babinski sign is evident, CT of the head as shown in the figure. Which option is the correct diagnosis of the patient:
[Options] A. Subarachnoid hemorrhage  B. Tumor of ventricle  C. Ventricular cyst  D. Ventricular hemorrhage  E. Choroidal calcification
[Answer] D

Table 1: An example of questions with additional images in the MLEC-QA dataset.
Question Type — Example (* represents the correct answer)

A1 (Public Health): What should be used to compare the results of two samples? A. T test  B. x2 test  C. µ test  D. F test  *E. Rank sum test

B1 (Chinese Western Medicine): A. Heart and liver  B. Spleen and lungs  C. Liver and kidneys  D. Heart and kidneys  E. Spleen and kidneys
1. What is the viscera that "Yikui" in Yikui homologous refers to? (E)
2. What is the viscera that "water and fire" in harmonization of water and fire refers to? (D)

A2 (Clinic): Primipara, 29 years old, at 37 weeks of gestation. Had jet vomiting once this morning, suddenly convulsed an hour ago and then went to hospital in a coma. Physical examination: BP 180/120mmHg, urine protein (+++). The most likely diagnosis of this patient is: *A. Eclampsia  B. Hematencephalon  C. Hysteria  D. Epilepsy  E. Cerebral thrombosis

A3 (Traditional Chinese Medicine): Female, 28 years old. In the recent month, has oral ulcer attacks repeatedly, upset, difficulties to sleep at night, dry stool, defecates every 1-2 days, has dry mouth, does not like drinking water, and has yellow urine, red tongue, greasy fur, and rapid pulse.
1. The drug of choice for the treatment of the pattern is: A. Mirabilite  B. Arctium lappa  *C. Bamboo leaf  D. Baikal skullcap  E. Gypsum
2. The appropriate compatible drug for the treatment of the disease is: A. Angelica dahurica  B. Cassia twig  C. Rhizoma Zingiberis  *D. Rheum officinale  E. Ash bark
3. Which of the following drugs should be used with caution during menstruation? A. Semen sojae praeparatum  *B. Rheum officinale  C. Coptis chinensis  D. Lophatherum gracile  E. Rhizoma phragmitis

A4 (Stomatology): A 50-year-old patient comes to the outpatient clinic 2 years after the end of radiotherapy for nasopharyngeal carcinoma outside hospital. Examination: full mouth multiple teeth dental surfaces with different degrees of caries, some of the affected teeth have become residual crowns, residual roots, less intraoral saliva, more soft scaling of the dental surfaces and sulci.
1. The diagnosis of this patient is: A. Acute caries  *B. Rampant caries  C. Chronic caries  D. Secondary caries  E. Smooth surface caries
2. There are several treatment designs as follows, except: A. Design treatment of full mouth caries  B. Endodontic treatment of teeth with hypodontia  *C. Filling metal material  D. Remineralization adjunctive therapy  E. Regular review

Table 2: Examples of sub-fields and question types in MLEC-QA. The Chinese version is in Appendix D.
Table 3: Comparison of MLEC-QA with existing open-domain multi-choice BQA datasets. No* indicates that a small number of sample data is available. Extra indicates if the dataset provides extra material to answer questions.
3 MLEC-QA Dataset

3.1 Data Collection

We collect 155,429 multi-choice questions from the 2006 to 2020 NMLEC and practice exercises from the Internet. Except for the categories that do not use Chinese in examinations, all categories are included in MLEC-QA: Clinic (Cli), Stomatology (Sto), Public Health (PH), Traditional Chinese Medicine (TCM), and Chinese Western Medicine (CWM).
Table 4: Statistics of question types in MLEC-QA, where "Extra" indicates the number of questions with extra materials. Only A1, A2, and B1 are used in the examination of Chinese Western Medicine.
Table 5: The overall statistics of MLEC-QA. Question/option length is calculated in characters. Vocabulary size is measured by Pkuseg (Luo et al., 2019) in words.
Table 6: Examples of reasoning types of the questions in MLEC-QA. The Chinese version is in Appendix E.

4 Methods

Notation We represent the MLEC-QA task as (D, Q, O, A), where Q_i represents the i-th Question, D_i represents the collection of retrieved question-relevant Documents, O_i = {O_iA, O_iB, O_iC, O_iD, O_iE} are the candidate Options, A_i represents the Answer, and we use A′_i to denote the Predicted Answer.
Table 7: Matching rate (%) of retrieved documents that exactly match, partially match, or mismatch the questions in the MLEC-QA dataset.
Table 8: Performance of baselines in accuracy (%) on the MLEC-QA dataset.
Table 9: Comparison of the best model (RoBERTa-wwm-ext-large) performance on MEDQA and our MLEC-QA dataset.
Top K:   1      2      3      4      5
Match: 52.08  42.19  36.25  32.29  29.46

Table 10: Matching rate (%) of Top K retrieved documents that exactly or partially match the questions in the Public Health subset.
Table 11: Chinese version of Table 2.

E Chinese version of reasoning types

Table 12: Chinese version of Table 6.
http://www.nmec.org.cn/Pages/ArticleList-12-0-0-1.html
https://www.elastic.co/
Acknowledgements

We thank the anonymous reviewers for their insightful comments and suggestions. This work is supported by the National Natural Science Foundation of China (NSFC No. 61972187).

A Source of Data Collection

For the five subsets in MLEC-QA, we collect the 2006 to 2020 Sprint Paper for the National Medical Licensing Examination (Tianjin Science and Technology Press) in PDF format, and then converted them into digital format via Optical Character Recognition (OCR). We manually checked and corrected the OCR results with confidence less than 0.99 to ensure the quality of our dataset. We also scraped practice exercises from offcn (http://www.offcn.com/yixue/yszg/), which are freely accessible online for public usage.

B Data Structure

The data structure below describes the JSON representation used in MLEC-QA.

{"qid": the question ID,
 "qtype": one of ["A1", "B1", "A2", "A3/A4"],
 "qtext": description of the question,
 "qimage": image or table path (if any),
 "options": {
   "A": description of option A,
   "B": description of option B,
   "C": description of option C,
   "D": description of option D,
   "E": description of option E },
 "answer": one of ["A", "B", "C", "D", "E"]}

C BM25 Score Function

The BM25 algorithm is defined as:

score(D, Q) = Σ_{q_i ∈ Q} IDF(q_i) · f(q_i, D) · (k1 + 1) / (f(q_i, D) + k1 · (1 − b + b · |D| / avgdl))

where q_i is the i-th query term of a query Q, f(q_i, D) is q_i's term frequency in the document D, |D| is the length of the document D in words, and avgdl is the average document length in the text collection from which documents are drawn. b determines the effect of the document length relative to the average length, and k1 controls term-frequency saturation. By default, b and k1 have values of 0.75 and 1.2 in Elasticsearch, respectively. IDF(q_i) is the Inverse Document Frequency (IDF) weight of the query term q_i. It is usually computed as:

IDF(q_i) = ln( (N − n(q_i) + 0.5) / (n(q_i) + 0.5) + 1 )

where N is the total number of documents in the collection, and n(q_i) is the number of documents containing q_i.
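For reference, the scoring function in Appendix C can be written out directly (a self-contained sketch of textbook BM25 over a tokenized toy corpus; Elasticsearch's built-in implementation may differ in minor details such as the exact IDF smoothing):

```python
import math

def bm25_score(query_terms, doc_terms, corpus, k1=1.2, b=0.75):
    """Score one document against a query with the BM25 formula from Appendix C.
    corpus is a list of tokenized documents used to estimate IDF and avgdl."""
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    score = 0.0
    for q in query_terms:
        n_q = sum(1 for d in corpus if q in d)             # documents containing q
        idf = math.log((N - n_q + 0.5) / (n_q + 0.5) + 1)  # smoothed IDF
        f = doc_terms.count(q)                             # term frequency in this document
        denom = f + k1 * (1 - b + b * len(doc_terms) / avgdl)
        score += idf * f * (k1 + 1) / denom
    return score
```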
Asma Ben Abacha, Eugene Agichtein, Yuval Pinter, and Dina Demner-Fushman. 2017. Overview of the medical question answering task at TREC 2017 LiveQA. In Proceedings of the Twenty-Sixth Text REtrieval Conference, TREC 2017, Gaithersburg, Maryland, USA, November 15-17, 2017, volume 500-324 of NIST Special Publication. National Institute of Standards and Technology (NIST).
VQA-Med: Overview of the medical visual question answering task at ImageCLEF. Asma Ben Abacha, Sadid A Hasan, V Vivek, Joey Datla, Dina Liu, Henning Demner-Fushman, Müller, Working Notes of CLEF 2019 -Conference and Labs of the Evaluation Forum. Lugano, Switzerland2380CEUR Workshop Proceedings. CEUR-WS.orgAsma Ben Abacha, Sadid A. Hasan, Vivek V. Datla, Joey Liu, Dina Demner-Fushman, and Henning Müller. 2019a. VQA-Med: Overview of the med- ical visual question answering task at ImageCLEF 2019. In Working Notes of CLEF 2019 -Conference and Labs of the Evaluation Forum, Lugano, Switzer- land, September 9-12, 2019, volume 2380 of CEUR Workshop Proceedings. CEUR-WS.org.
Bridging the gap between consumers' medication questions and trusted answers. Asma Ben Abacha, Yassine Mrabet, Mark Sharp, Travis R Goodwin, Sonya E Shooshan, Dina Demner-Fushman, MEDINFO 2019: Health and Wellbeing e-Networks for All -Proceedings of the 17th World Congress on Medical and Health Informatics. Lyon, FranceIOS Press264Asma Ben Abacha, Yassine Mrabet, Mark Sharp, Travis R. Goodwin, Sonya E. Shooshan, and Dina Demner-Fushman. 2019b. Bridging the gap be- tween consumers' medication questions and trusted answers. In MEDINFO 2019: Health and Wellbeing e-Networks for All -Proceedings of the 17th World Congress on Medical and Health Informatics, Lyon, France, 25-30 August 2019, volume 264 of Studies in Health Technology and Informatics, pages 25-29. IOS Press.
AskHERMES: An online question answering system for complex clinical questions. Yonggang Cao, Feifan Liu, Pippa Simpson, Lamont D Antieau, Andrew S Bennett, James J Cimino, John W Ely, Hong Yu, J. Biomed. Informatics. 442Yonggang Cao, Feifan Liu, Pippa Simpson, Lamont D. Antieau, Andrew S. Bennett, James J. Cimino, John W. Ely, and Hong Yu. 2011. AskHERMES: An online question answering system for complex clin- ical questions. J. Biomed. Informatics, 44(2):277- 288.
Reading Wikipedia to answer opendomain questions. Danqi Chen, Adam Fisch, Jason Weston, Antoine Bordes, 10.18653/v1/P17-1171Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational LinguisticsVancouver, CanadaLong Papers1Association for Computational LinguisticsDanqi Chen, Adam Fisch, Jason Weston, and Antoine Bordes. 2017. Reading Wikipedia to answer open- domain questions. In Proceedings of the 55th An- nual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1870- 1879, Vancouver, Canada. Association for Computa- tional Linguistics.
Yiming Cui, W Che, T Liu, B Qin, Ziqing Yang, S Wang, G Hu, Pre-Training with Whole Word Masking for Chinese BERT. ArXivYiming Cui, W. Che, T. Liu, B. Qin, Ziqing Yang, S. Wang, and G. Hu. 2019. Pre-Training with Whole Word Masking for Chinese BERT. ArXiv.
BERT: Pre-training of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, 10.18653/v1/N19-1423Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesMinneapolis, MinnesotaLong and Short Papers1Association for Computational LinguisticsJacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Associ- ation for Computational Linguistics.
Applying deep matching networks to Chinese medical question answering: A study and a dataset. Junqing He, Mingming Fu, Manshu Tu, 10.1186/s12911-019-0761-8BMC Medical Informatics and Decision Making. 19252Junqing He, Mingming Fu, and Manshu Tu. 2019. Applying deep matching networks to Chinese med- ical question answering: A study and a dataset. BMC Medical Informatics and Decision Making, 19(2):52.
PathVQA: 30000+ questions for medical visual question answering. CoRR, abs. Xuehai He, Yichen Zhang, Luntian Mou, Eric P Xing, Pengtao Xie, Xuehai He, Yichen Zhang, Luntian Mou, Eric P. Xing, and Pengtao Xie. 2020a. PathVQA: 30000+ ques- tions for medical visual question answering. CoRR, abs/2003.10286.
Infusing Disease Knowledge into BERT for Health Question Answering, Medical Inference and Disease Name Recognition. Yun He, Ziwei Zhu, Yin Zhang, Qin Chen, James Caverlee, 10.18653/v1/2020.emnlp-main.372Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Online. Association for Computational LinguisticsYun He, Ziwei Zhu, Yin Zhang, Qin Chen, and James Caverlee. 2020b. Infusing Disease Knowledge into BERT for Health Question Answering, Medical Inference and Disease Name Recognition. In Pro- ceedings of the 2020 Conference on Empirical Meth- ods in Natural Language Processing (EMNLP), pages 4604-4614, Online. Association for Compu- tational Linguistics.
Di Jin, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, arXiv:2009.13081Hanyi Fang, and Peter Szolovits. 2020. What Disease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams. csDi Jin, Eileen Pan, Nassim Oufattole, Wei-Hung Weng, Hanyi Fang, and Peter Szolovits. 2020. What Dis- ease does this Patient Have? A Large-scale Open Domain Question Answering Dataset from Medical Exams. arXiv:2009.13081 [cs].
PubMedQA: A dataset for biomedical research question answering. Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William Cohen, Xinghua Lu, 10.18653/v1/D19-1259Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsQiao Jin, Bhuwan Dhingra, Zhengping Liu, William Cohen, and Xinghua Lu. 2019. PubMedQA: A dataset for biomedical research question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 2567- 2577, Hong Kong, China. Association for Computa- tional Linguistics.
Qiao Jin, Zheng Yuan, Guangzhi Xiong, Qianlan Yu, Chuanqi Tan, Mosha Chen, arXiv:2102.05281Songfang Huang, Xiaozhong Liu, and Sheng Yu. 2021. Biomedical Question Answering: A Comprehensive Review. csQiao Jin, Zheng Yuan, Guangzhi Xiong, Qianlan Yu, Chuanqi Tan, Mosha Chen, Songfang Huang, Xi- aozhong Liu, and Sheng Yu. 2021. Biomedical Question Answering: A Comprehensive Review. arXiv:2102.05281 [cs].
RACE: Large-scale ReAding comprehension dataset from examinations. Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, Eduard Hovy, 10.18653/v1/D17-1082Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. the 2017 Conference on Empirical Methods in Natural Language ProcessingCopenhagen, DenmarkAssociation for Computational LinguisticsGuokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale ReAding comprehension dataset from examinations. In Proceedings of the 2017 Conference on Empiri- cal Methods in Natural Language Processing, pages 785-794, Copenhagen, Denmark. Association for Computational Linguistics.
A dataset of clinically generated visual questions and answers about radiology images. Jason J Lau, Soumya Gayen, Asma Ben Abacha, Dina Demner-Fushman, Scientific Data. 51180251Jason J. Lau, Soumya Gayen, Asma Ben Abacha, and Dina Demner-Fushman. 2018. A dataset of clini- cally generated visual questions and answers about radiology images. Scientific Data, 5(1):180251.
Towards medical machine reading comprehension with structural knowledge and plain text. Dongfang Li, Baotian Hu, Qingcai Chen, Weihua Peng, Anqi Wang, 10.18653/v1/2020.emnlp-main.111Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP). the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP)Online. Association for Computational LinguisticsDongfang Li, Baotian Hu, Qingcai Chen, Weihua Peng, and Anqi Wang. 2020. Towards medical machine reading comprehension with structural knowledge and plain text. In Proceedings of the 2020 Con- ference on Empirical Methods in Natural Language Processing (EMNLP), pages 1427-1438, Online. As- sociation for Computational Linguistics.
PKUSEG: A toolkit for multi-domain chinese word segmentation. CoRR, abs. Ruixuan Luo, Jingjing Xu, Yi Zhang, Xuancheng Ren, Xu Sun, Ruixuan Luo, Jingjing Xu, Yi Zhang, Xuancheng Ren, and Xu Sun. 2019. PKUSEG: A toolkit for multi-domain chinese word segmentation. CoRR, abs/1906.11455.
Answering clinical questions with role identification. Yun Niu, Graeme Hirst, Gregory Mcarthur, Patricia Rodriguez-Gianolli, 10.3115/1118958.1118968Proceedings of the ACL 2003 Workshop on Natural Language Processing in Biomedicine. the ACL 2003 Workshop on Natural Language Processing in BiomedicineSapporo, JapanAssociation for Computational LinguisticsYun Niu, Graeme Hirst, Gregory McArthur, and Patri- cia Rodriguez-Gianolli. 2003. Answering clinical questions with role identification. In Proceedings of the ACL 2003 Workshop on Natural Language Processing in Biomedicine, pages 73-80, Sapporo, Japan. Association for Computational Linguistics.
emrQA: A large corpus for question answering on electronic medical records. Anusri Pampari, Preethi Raghavan, Jennifer Liang, Jian Peng, 10.18653/v1/D18-1258Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing. the 2018 Conference on Empirical Methods in Natural Language ProcessingBrussels, BelgiumAssociation for Computational LinguisticsAnusri Pampari, Preethi Raghavan, Jennifer Liang, and Jian Peng. 2018. emrQA: A large corpus for question answering on electronic medical records. In Proceedings of the 2018 Conference on Em- pirical Methods in Natural Language Processing, pages 2357-2368, Brussels, Belgium. Association for Computational Linguistics.
Rock Breaks Scissors. Little Brown &. William Poundstone, CoWilliam Poundstone. 2014. Rock Breaks Scissors. Lit- tle Brown & Co.
The Probabilistic Relevance Framework: BM25 and Beyond. Stephen Robertson, Hugo Zaragoza, Foundations and Trends in Information Retrieval. 34Stephen Robertson and Hugo Zaragoza. 2009. The Probabilistic Relevance Framework: BM25 and Be- yond. Foundations and Trends in Information Re- trieval, 3(4):333-389.
A knowledge based method for the medical question answering problem. Rafael M Terol, Patricio Martínez-Barco, Manuel Palomar, Comput. Biol. Medicine. 3710Rafael M. Terol, Patricio Martínez-Barco, and Manuel Palomar. 2007. A knowledge based method for the medical question answering problem. Comput. Biol. Medicine, 37(10):1511-1521.
ChiMed: A Chinese medical corpus for question answering. Yuanhe Tian, Weicheng Ma, Fei Xia, Yan Song, 10.18653/v1/W19-5027Proceedings of the 18th BioNLP Workshop and Shared Task. the 18th BioNLP Workshop and Shared TaskFlorence, ItalyAssociation for Computational LinguisticsYuanhe Tian, Weicheng Ma, Fei Xia, and Yan Song. 2019. ChiMed: A Chinese medical corpus for ques- tion answering. In Proceedings of the 18th BioNLP Workshop and Shared Task, pages 250-260, Flo- rence, Italy. Association for Computational Linguis- tics.
George Tsatsaronis, Georgios Balikas, Prodromos Malakasiotis, Ioannis Partalas, Matthias Zschunke, Michael R. Alvers, Dirk Weissenborn, Anastasia Krithara, Sergios Petridis, Dimitris Polychronopoulos, Yannis Almirantis, John Pavlopoulos, Nicolas Baskiotis, Patrick Gallinari, Thierry Artières, Axel-Cyrille Ngonga Ngomo, Norman Heino, Éric Gaussier, Liliana Barrio-Alvers, Michael Schroeder, Ion Androutsopoulos, and Georgios Paliouras. 2015. An overview of the BIOASQ large-scale biomedical semantic indexing and question answering competition. BMC Bioinformatics, 16:138:1-138:28.
HEAD-QA: A healthcare dataset for complex reasoning. David Vilares, Carlos Gómez-Rodríguez, 10.18653/v1/P19-1092Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics. the 57th Annual Meeting of the Association for Computational LinguisticsFlorence, ItalyAssociation for Computational LinguisticsDavid Vilares and Carlos Gómez-Rodríguez. 2019. HEAD-QA: A healthcare dataset for complex rea- soning. In Proceedings of the 57th Annual Meet- ing of the Association for Computational Linguis- tics, pages 960-966, Florence, Italy. Association for Computational Linguistics.
The TREC-8 question answering track. Ellen M Voorhees, Dawn M Tice, Proceedings of the Second International Conference on Language Resources and Evaluation (LREC'00). the Second International Conference on Language Resources and Evaluation (LREC'00)Athens, GreeceEuropean Language Resources Association (ELRAEllen M. Voorhees and Dawn M. Tice. 2000. The TREC-8 question answering track. In Proceed- ings of the Second International Conference on Language Resources and Evaluation (LREC'00), Athens, Greece. European Language Resources As- sociation (ELRA).
Automatic clinical question answering based on UMLS relations. Weiming Wang, Dawei Hu, Min Feng, Liu Wenyin, Third International Conference on Semantics, Knowledge and Grid. Xian, Shan Xi, ChinaIEEE Computer SocietyWeiming Wang, Dawei Hu, Min Feng, and Liu Wenyin. 2007. Automatic clinical question answering based on UMLS relations. In Third International Con- ference on Semantics, Knowledge and Grid, Xian, Shan Xi, China, October 29-31, 2007, pages 495- 498. IEEE Computer Society.
Development, implementation, and a cognitive evaluation of a definitional question answering system for physicians. Hong Yu, Minsuk Lee, David R Kaufman, John W Ely, Jerome A Osheroff, George Hripcsak, James J Cimino, J. Biomed. Informatics. 403Hong Yu, Minsuk Lee, David R. Kaufman, John W. Ely, Jerome A. Osheroff, George Hripcsak, and James J. Cimino. 2007. Development, imple- mentation, and a cognitive evaluation of a defini- tional question answering system for physicians. J. Biomed. Informatics, 40(3):236-251.
Multi-scale attentive interaction networks for chinese medical question answer selection. S Zhang, X Zhang, H Wang, L Guo, S Liu, IEEE Access. 6S. Zhang, X. Zhang, H. Wang, L. Guo, and S. Liu. 2018a. Multi-scale attentive interaction networks for chinese medical question answer selection. IEEE Access, 6:74061-74071.
Chinese medical question answer matching using end-to-end characterlevel multi-scale CNNs. Sheng Zhang, Xin Zhang, Hui Wang, Jiajun Cheng, Pei Li, Zhaoyun Ding, Applied Sciences. 78767Sheng Zhang, Xin Zhang, Hui Wang, Jiajun Cheng, Pei Li, and Zhaoyun Ding. 2017. Chinese medical ques- tion answer matching using end-to-end character- level multi-scale CNNs. Applied Sciences, 7(8):767.
Medical exam question answering with large-scale reading comprehension. Xiao Zhang, Ji Wu, Zhiyang He, Xien Liu, Ying Su, Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18). the Thirty-Second AAAI Conference on Artificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18)New Orleans, Louisiana, USAAAAI PressXiao Zhang, Ji Wu, Zhiyang He, Xien Liu, and Ying Su. 2018b. Medical exam question answering with large-scale reading comprehension. In Proceed- ings of the Thirty-Second AAAI Conference on Ar- tificial Intelligence, (AAAI-18), the 30th innovative Applications of Artificial Intelligence (IAAI-18), and the 8th AAAI Symposium on Educational Advances in Artificial Intelligence (EAAI-18), New Orleans, Louisiana, USA, February 2-7, 2018, pages 5706- 5713. AAAI Press.
UER: An open-source toolkit for pretraining models. Zhe Zhao, Hui Chen, Jinbin Zhang, Xin Zhao, Tao Liu, Wei Lu, Xi Chen, Haotang Deng, Qi Ju, Xiaoyong Du, 10.18653/v1/D19-3041Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing. the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language ProcessingHong Kong, ChinaAssociation for Computational LinguisticsSystem DemonstrationsZhe Zhao, Hui Chen, Jinbin Zhang, Xin Zhao, Tao Liu, Wei Lu, Xi Chen, Haotang Deng, Qi Ju, and Xiaoy- ong Du. 2019. UER: An open-source toolkit for pre- training models. In Proceedings of the 2019 Con- ference on Empirical Methods in Natural Language Processing and the 9th International Joint Confer- ence on Natural Language Processing (EMNLP- IJCNLP): System Demonstrations, pages 241-246, Hong Kong, China. Association for Computational Linguistics.
The Thirty-Second Innovative Applications of Artificial Intelligence Conference. Haoxi Zhong, Chaojun Xiao, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, Maosong Sun, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence. New York, NY, USA2020The Thirty-Fourth AAAI Conference on Artificial IntelligenceHaoxi Zhong, Chaojun Xiao, Cunchao Tu, Tianyang Zhang, Zhiyuan Liu, and Maosong Sun. 2020. JEC- QA: A Legal-Domain Question Answering Dataset. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Inno- vative Applications of Artificial Intelligence Confer- ence, IAAI 2020, The Tenth AAAI Symposium on Ed- ucational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020., pages 9701-9708.
Question answering in biomedicine. Pierre Zweigenbaum, Proceedings of the 10th Conference of the European Chapter of the Association for Computational Linguistics. the 10th Conference of the European Chapter of the Association for Computational LinguisticsPierre Zweigenbaum. 2003. Question answering in biomedicine. In Proceedings of the 10th Conference of the European Chapter of the Association for Com- putational Linguistics. |
1,693,829 | Reduplication across Categories in Cantonese | This paper investigates the formal semantics of reduplication in Cantonese, i.e. how the meaning of reduplicated forms are encoded and computed with the given meaning from the base forms. In particular, this paper argues that reduplication denotes a summation function that adds up arguments (be they object-, event-or degreearguments) and return a collection of the elements. The surface difference across categories is accounted for in terms of cumulativity and quantization(Krifka, 1998;Krifka, 2001;Rothstein, 2004). The present approach makes use of scalar structure and summation as formal tools to model the cross-categorial behaviour of reduplication. It provides the advantage of a unified theory for lexical composition across categories nouns, verbs and adjectives. | [
12590683
] | Reduplication across Categories in Cantonese
Charles Lam charleslam@purdue.edu
Linguistics Program
Purdue University Beering Hall
1289, 47907Room, West LafayetteIN
Reduplication across Categories in Cantonese
reduplication, formal semantics, cumulativity, cross-categorial behaviour
This paper investigates the formal semantics of reduplication in Cantonese, i.e. how the meaning of reduplicated forms is encoded and computed from the given meaning of the base forms. In particular, this paper argues that reduplication denotes a summation function that adds up arguments (be they object-, event- or degree-arguments) and returns a collection of the elements. The surface difference across categories is accounted for in terms of cumulativity and quantization (Krifka, 1998; Krifka, 2001; Rothstein, 2004). The present approach makes use of scalar structure and summation as formal tools to model the cross-categorial behaviour of reduplication. It provides the advantage of a unified theory for lexical composition across the categories nouns, verbs and adjectives.
Introduction
Reduplication is found across the syntactic categories noun, verb and adjective in Cantonese. They all share a similar surface order, but the interpretations can be quite different. Nominal reduplication denotes an exhaustive list, such as 'everybody, every apple'. Verbal reduplication displays either a durative or an iterative reading, depending on the telicity of the verbal predicate. Adjectival reduplication shows a hedging and diminutive reading, as in 'a little fat' or 'reddish'.
The goal of this paper is to establish a unified account for the cross-categorial reduplication that can interpret the various meanings. We argue that the common thread behind these interpretations is summation. Building on the notions of cumulativ-ity and quantization, the interpretations of reduplication are predictable.
In what follows, section 2 lists out the distribution and characteristics of reduplication in Cantonese. Section 3 reviews previous studies and points out that they cannot account for the behaviour of reduplication across categories. Section 4 discusses the formal property of cumulativity (Krifka, 1998;Rothstein, 2004), which provides a basis to account for the surface differences across categories. To test the hypothesis, section 5 provides the details of the proposal and shows how various interpretations can be handled by the present cumulativity analysis. Section 6 discusses the advantage of this approach and also the theoretical implications.
Data
This section makes a few observations on reduplication in Cantonese. We will first focus on adjectives, then include nouns and verbs, which share a similar surface pattern. Consider sentence (1), in which the reduplicated adjective denotes a sense of hedging or diminution.
(1) keoi5 3sg gou1 tall gou1 tall dei2 Prt 'S/he is fairly tall.'
Uttering (1) means that the person is considered tall, but probably not the tallest person or not even 'very tall'. This can be seen in (2), which is infelicitous unless it is otherwise specified that all other members of the group are simply short.
(2) keoi5 3sg gou1 tall gou1 tall dei2, Prt, so2ji5 therefore keoi5 3sg zeoi3 SUPERLATIVE gou1 tall '#S/he is fairly tall, so s/he is the tallest.'
The reduplicated adjective form with the particle, gou1 gou1 dei2, is in complementary distribution with (3), where an overt marker shows the magnitude of tallness. This requirement of a degree marker in (3) is well documented; see Grano (2011) for a recent discussion of its syntax and semantics.
(3) keoi5 3sg *(hou2 / gei2) very / fairly gou1 tall 'S/he is very / fairly tall.'

Third, adjective reduplication shows an interesting parallelism on the surface with nominal (4) and verbal (5) reduplication. The data above show that reduplication can apply to lexical categories (i.e. nouns, verbs and adjectives). This parallelism is not unique to Cantonese: Chakraborty and Bandyopadhyay (2009) also report that reduplication in Bengali can denote repetition (e.g. 'every year'), plurality (e.g. 'the houses'), emphasis (e.g. 'deep red rose') and imperfective verbs (e.g. 'Talking about something, suddenly he stopped.'), together with a few other meanings. It is therefore plausible that reduplication denotes some function that is more generic and applicable to different elements. This paper does not attempt to account for cross-linguistic data, but instead focuses on Cantonese. The working hypothesis is that reduplicated forms have a common semantic thread between them, and that this common thread is summation. What the summation function does is 'add up' atomic elements into a collection. Reduplicated nouns denote an exhaustive group. For example, (4) refers to a group of children, which is equivalent to 'every single child' in English. Reduplicated verbs denote a durative event, as in tai2 tai2 ha5 syu1 'reading (books)' in (5). An interesting feature is that the predicates denoted by reduplicated verbs must be atelic, which in turn suggests that reduplicated verbs denote a collection of homogeneous subevents, following the assumption that atelic events have 'subevental properties' (Bennett and Partee, 1972; Krifka, 2001). This paper applies the existing analysis of cumulativity to reduplication in the nominal and verbal domain and further extends the analysis to adjectival reduplication. We thus hypothesize the following:
(6) Reduplication in Cantonese denotes a summation function.
The hypothesis in (6) predicts that the result of the function is always a sum of the input. If the result of the reduplication does not denote a sum or total of the given input, one may claim that hypothesis (6) is falsified.
Previous studies

3.1 The complex nature of adjectives
In general, the denotation of adjectives or properties can be decomposed into semantic functions of dimension, scale and degree. A dimension is a choice of measurement, such as height or weight. A scale is a linearly ordered set within the same dimension, such as tall or short for the dimension of height and heavy or light for weight. A degree specifies a point along the scale. A degree can bear a specific value, as in full or empty in English. For example, whenever a speaker perceives the water level in a cup to reach the maximum value (i.e. 100%), it is felicitous to say The cup is full. However, a degree can also bear a fuzzy value, which may vary depending on the context. For instance, one would have very different standards of 'being tall' for John is tall and The Willis tower is tall. This decomposed adjective phrase analysis is also known as the DegP-shell analysis. Based on this analysis (Xiang, 2005; Grano and Kennedy, 2012), this paper assumes that adjective phrases are internally complex. In terms of syntax, there are multiple heads within the traditional AdjP.
Cross-categorial reduplication in various languages
There is little discussion specifically on adjectival reduplication in the literature. Although adjectival reduplication is attested in many other languages, e.g. Basque (De Rijk and De Coene, 2008), Bengali (Chakraborty and Bandyopadhyay, 2009) and a handful of others (Rubino, 2011), little attention has been paid to its formal semantic properties. Regier (1994) does provide a good summary of what reduplication can mean in various languages, but does not include Cantonese. A recent study on adjective reduplication in Mandarin (Liu, 2012) provides an informal pragmatic account of some restrictions on adjectives that can undergo reduplication. Liu's account, like other works cited in this section, adopts an informal cognitive grammar approach to the issue. Also, Liu did not attempt to handle reduplication in nouns and verbs; thus the present analysis differs from Liu's both in its formal approach and in its scope of study.
Based on crosslinguistic data, Abraham (2005) suggests 'divisibility' as a criterion for base forms that undergo reduplication. He generalizes that reduplicated forms always denote predicates that are divisible, so these divisible predicates must always be a collection of some elements. Abraham (2005) also notes that this generalization would contradict the empirical data that some reduplication forms actually denote diminutive adjectives. Kouwenberg and LaCharité (2005) address the apparent contradiction of diminutive or 'approximative' interpretation of reduplication and suggest that the diminutive reading is an extension from a dispersive interpretation. That means a diminutive reading of 'yellowish' would come from a dispersive 'yellow-spotted'. 'Dispersive' means that multiple instances of yellow-ness, such as spots or stains, are spread over or dispersed. This reading can therefore be construed as multiple instances of 'yellow'. For Kouwenberg and LaCharité (2005), this is a connecting context where reduplication first bears plurality. This reading can be extended to diminution, in the sense that yellowness is spread over the entity in a diluted way, instead of being individual spots or patches. However, such an account does not constrain when a language or an expression can perform semantic extension from dispersive to diminutive. It does work well for reduplication of colour adjectives in Cantonese, but not with adjectives of size and shape. For example, reduplication of predicates such as 'big' or 'tall' can never bear any dispersive reading, because the property of 'big' or 'tall' always predicates over the whole entity, not part of it. Kouwenberg and LaCharité (2005)'s theory of extension relies on the dispersive reading extending to diminution. Therefore, it cannot account for size and shape adjectives bearing diminutive reading, because the dispersive reading is impossible for size and shape adjectives. This paper takes the intuition that reduplication denotes a sense of multiplicity of elements, but does not assert that diminution comes from dispersion. Instead, this paper suggests that both diminutive and dispersive readings are the result of summation, as will be further discussed in section 5.3.
The theory in Abraham (2005) and Kouwenberg and LaCharité (2005) treats iconicity as a central property in reduplication. This paper takes the multiplication as an intuition that there is a summation process. In sections 4 and 5, we show that the formal properties of cumulativity and quantization can resolve the apparent contradiction that the same predicate can denote the sum of a collection and a subpart of the same collection.
Cumulativity and Quantization
The central claim of this paper is that summation is the underlying common thread in reduplication. Before we move on to the implementation, it is crucial to understand that cumulativity and quantization have a direct impact on the result of the elements undergoing summation. This section sets up the background of cumulativity in the literature on nouns and verbs. Krifka (2001) defines cumulativity as the following:
(7) A predicate P is cumulative iff
(i) ∀x, y [P(x) ∧ P(y) → P(x ⊕ y)]
(ii) ∃x, y [P(x) ∧ P(y) ∧ ¬x = y]
Condition (i) means when two entities x and y are added together, they can still be described with the same denotation. Condition (ii) ensures x and y are distinct elements. For example, in a situation when Mary said she saw that John left, and Jane also said she saw that John left, we cannot infer that John left twice or conclude that 'leaving' is cumulative. The reason is that 'John' in the two utterances is the same person. Cumulative predicates include 'wine' or 'apples' 2 . If an object x called 'wine' is added to another object, which is distinct from x, let's call it y, which is also wine, we have a new object containing x and y. Since we can reasonably describe this new object as 'wine' (as opposed to 'two wines', which is possible in a different context), we can conclude that 'wine' denotes a cumulative predicate. Typically, but not necessarily, cumulative predicates include mass nouns and count plural nouns. As noted in Rothstein (2004), there are several 'exceptions' like line, sequence or fence in English, which we will not discuss in detail here.
The same characteristic can be applied to verbal predicates. Atelic predicates are typically cumulative. Take run in English as an example. Two distinct instances of run, when put together, can also be described as 'run'. Likewise, putting John built houses last year and John builds houses this year together, one can still describe the whole event as 'building houses'.
In contrast, count nouns with number marking are characterized by the property of quantization. Krifka (2001) defines quantized predicates as:
(8) A predicate P is quantized iff ∀x, y[P(x) ∧ P(y) → ¬y < x]
Using the example in Krifka (1998), if an element x can be called '3 apples', then it is impossible for any proper subset of x to be described as '3 apples'. This captures our intuition that part of '3 apples' can be an apple, or two apples, but not three apples. Similarly, a proper part of quantized events cannot be identical to its superset. If we say 'John made four cakes.', the part of the event, for example John making one cake, cannot be described as 'John made four cakes.' It shows that the verbal predicate 'make four cakes' is quantized.
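To make the two definitions concrete, a minimal computational sketch can be given (the Python code below is purely illustrative and not part of the analysis; the set-based model of parts, the toy domain and the predicate names are assumptions). Individuals are modelled as sets of atomic parts, the sum operation as set union, and (7) and (8) are checked over a finite domain.

from itertools import combinations

def mereological_sum(x, y):
    return x | y

def is_cumulative(pred, domain):
    # (7): at least two distinct P-elements exist, and any sum of P-elements is again P
    ps = [d for d in domain if pred(d)]
    return len(ps) >= 2 and all(pred(mereological_sum(x, y)) for x, y in combinations(ps, 2))

def is_quantized(pred, domain):
    # (8): no P-element is a proper part of another P-element
    ps = [d for d in domain if pred(d)]
    return all(not (y < x) for x in ps for y in ps)

atoms = ['a1', 'a2', 'a3', 'a4']
domain = [frozenset(s) for r in range(1, 5) for s in combinations(atoms, r)]

def wine(x):            # any portion, or sum of portions, still counts as 'wine'
    return len(x) >= 1

def three_apples(x):    # only a sum of exactly three atoms counts as '3 apples'
    return len(x) == 3

print(is_cumulative(wine, domain), is_quantized(wine, domain))                  # True False
print(is_cumulative(three_apples, domain), is_quantized(three_apples, domain))  # False True

The sketch returns the expected split: 'wine' is cumulative and not quantized, whereas '3 apples' is quantized and not cumulative.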
The notions of cumulativity and quantization capture the human understanding of collective entities. There are two important messages. Firstly, summation of two elements (more precisely, two predicates) can often result in an element with the same denotation, as seen in cumulative predicates 'apples' or 'run'. Note that the count-mass distinction is linguistic, which means that the encoding of whether a noun is count or mass is independent from ontology. For example, nouns like 'furniture' or 'footwear' are considered mass because they do not co-occur with numerals or the singular indefinite article, as in '*four furniture' or '*a footwear'. Second, the count-mass distinction is language-specific, meaning that an entity denoted by a mass noun in one language can be count in another language.
Summation as a common thread
The previous section shows that the behaviour or the interpretation of the sum (i.e. the returned value as a result of summation) can be used to indicate cumulativity and quantization. The denotation of 'SUM' in (9) is essentially 'every' in English (Heim and Kratzer, 1998).
(9) SUM = λf ∈ D . ∀x ∈ D → f(x) = 1
This ensures all the individuals x are included in the sum D. From this formalization, we can see that whenever the sum shares the same denotation as its atomic elements, the atomic elements must be cumulative (e.g. 'some water' can be a collection of 'some water'). On the contrary, if a collection does not share the same denotation as its atoms (e.g. 'a chair' cannot have 'a chair' as its proper subset), the elements are quantized.
This section shows the implementation of summation in reduplication across the categories noun, verb and adjective in Cantonese.
Nouns
Cantonese nominals in general require classifiers on top of the noun. Nominal reduplication always applies to the classifier, as in (10). In (10) and (11), both the classifier and noun are present. The crucial difference between (10) and (11) is that the former reduplicates its classifier, which is acceptable, and the latter reduplicates its noun, which is unacceptable. Sentence (12) shows reduplication of the noun without a classifier, which is also unacceptable.
(10) go2 go2 sai3lo6 dou1 hou2 lek1
     CL CL child DISTR very smart
     'Every child is very smart.'
(11) *go2 sai3lo6 sai3lo6 dou1 hou2 lek1
     CL child child DISTR very smart
     Intended: 'Every child is very smart.'
(12) *sai3(lo6) sai3lo6 dou1 hou2 lek1
     child child DISTR very smart
     Intended: 'Every child is very smart.'
The data show that Cantonese reduplication applies only to classifiers, but not nouns 3 . Both the classifier and the noun must be present and whenever there is reduplication, it must apply to the classifier 4 .
There are a few apparent exceptions to the generalization that reduplication always applies to the classifier, such as jat6 jat6 'day-day -every day', jan4 jan4 'person-person -everybody', dou6 dou6 'place-place -everywhere', where there is no classifier present. However, one can also observe that these nouns behave differently from other common nouns in other contexts. For example, these nouns can cooccur with numerals without any classifiers, as shown in (13). Also, exceptions like jat6 'day' or nin4 'year' are measurement units of time, which can never occur with classifiers (*saam1 go3 jat6 'three-classifier-day' would be unacceptable for '3 days'). We can therefore treat these nouns as if they already carry the functional feature that classifiers add to common nouns. This observation conforms with (Zhang, 2013)'s view that individuation is not exclusively expressed by classifiers and bare nouns in classifier-languages can denote countable objects.
(13) keoi5 heoi5 zo2 hon4gwok3 sap6 (*go3) jat1
     3sg go Perf Korea ten CL day
     'S/he went to Korea for ten days.'
Now with well-formed reduplication like (10), we can see that each single member 'child' in the group 'every child' is quantized, but not cumulative, because a proper subpart of a child would not qualify as a child, i.e. one cannot reasonably call a subpart, say the shoulder of a child, 'a child'. Formally:
(14) SUM (child) = λf ∈ D . ∀x ∈ D → f(x)=1 = ∀x ∈ D → child(x)=1
The phrase 'every child' is true (truth value=1) iff each of the members in the domain D is a child. The Cantonese phrase go3 sai3lo6 'a child' (before reduplication) works the same way as its English counterpart. Since the phrase go3 sai3lo6 'a child' is quantized, we predict that a summation of such elements would result in a quantized entity. This prediction is borne out in the data. To see this, let us focus on the individual member first. Since the utterance (10) denotes an exhaustive group of 'every child', it means that the predicate 'very smart' would apply to each of the individual members. The interpretation is also supported by the self-contradiction in the utterance in (15). Since (15) is not acceptable, we can infer that the reduplicated noun must denote every single member of 'the children'.
(15) #go2 go2 sai3lo6 dou1 hou2 lek1, dan6hai6 jau1 jat1 go3 m4 lek1
     CL CL child DISTR very smart but EXIST one CL Neg smart
     '#Every child is very smart, but one of them is not.'
As predicted for the reduplicated form denoting 'every child', we can also observe the predicted result of a quantized entity. A proper subset of 'every child' cannot be also described as 'every child', for the reason that if a set y is the proper subset of a larger set x, y is necessarily smaller and thus does not include at least one of the members in x. Thus it is impossible to describe set y with the same denotation of x and we can conclude that the reduplicated noun phrase denotes a quantized predicate as well.
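The same toy part-structure used above can replay this prediction (again an illustrative sketch, under the assumption that a plural individual is the set-union of its members): the sum returned by SUM in (9) satisfies 'every child', while none of its proper parts does.

from itertools import combinations
from functools import reduce

children = [frozenset({'c1'}), frozenset({'c2'}), frozenset({'c3'})]

def summation(individuals):
    # SUM in (9): gather every member of the domain into one plural individual
    return reduce(lambda x, y: x | y, individuals, frozenset())

every_child = summation(children)

def denotes_every_child(x):
    # true only of the exhaustive plurality, cf. (14)
    return x == every_child

proper_parts = [frozenset(s) for r in range(1, len(every_child))
                for s in combinations(sorted(every_child), r)]
print(any(denotes_every_child(p) for p in proper_parts))   # False: 'every child' is quantized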
Verbs
Verbal reduplication in Cantonese shows a different pattern than nominals. Example (5) is repeated below as (16). The reading event must be interpreted as a prolonged, durative event, as its English translation suggests.
(16) ngo5 tai2 tai2 ha5 syu1 fan3 zo2
     1sg read read Dur book sleep Perf
     'I fell asleep while reading.'
As first suggested by Bennett and Partee (1972), all the subparts within an atelic event are homogeneous. It provides a basis to compare an atom of a durative event to a singular member in plural count nouns. That means the durative reading event in (16) can be seen as a collection of atomic reading subevents. Since these subevents are homogeneous, the whole reading event is considered atelic.
Atelic events can be independently tested with duration modification, which is equivalent to the for / in a period of time test in English. If a predicate can be modified by 'for an hour' (or any other context-appropriate time interval), then the predicate is atelic. For example, John read for an hour is acceptable, whereas *John read in an hour is not. It shows that 'read' is atelic. Cantonese does not use a prepositional phrase to show duration, but uses the verb copying construction like (17) instead 5 . Example (18) is equivalent to in 3 minutes in English. Since only (17) but not (18) is compatible with tai2 syu1 'read', we can conclude that tai2 syu1 is atelic.
(17) ngo5 tai2 syu1 tai2 zo2 saam1-fen1-zong1
     1sg read book read Perf 3-minute
     'I read for 3 minutes.'
(18) *ngo5 hai2 saam1-fen1-zong1 zi1noi6 tai2 syu1
     1sg in 3-minute within read book
     '*I read (with)in 3 minutes.'
Because tai2 syu1 'read' is atelic, we can say that each instance of reading is identical to other instances within the same event.
What makes verbal reduplication such as (17) different from nominal reduplication is that the members of the reading events are non-quantized and cumulative. Conceptually, an instance of reading counts as reading, no matter how long it lasts. Also, adding up two instances of reading would also be interpreted as reading. In other words, atelic predicates such as tai2 syu1 'read book' in Cantonese are cumulative. Let x and y be distinct atomic events, and let tai2 syu1 'read' be predicated of each of them. The interpretations above are formalized in (19) below:
(19) read(x) ∧ read(y) = 1
     read(x ⊕ y) = 1
What the durative interpretation of verbal reduplication tells us is that it must denote a sum of multiple subevents as members; otherwise one should be able to find verbal reduplication examples that are punctual (i.e. not durative). However, since the reduplicated verb still denotes one prolonged event, we must account for this difference from nominal reduplication (which denotes a collection of distinct, individuated members) in terms of cumulativity. At the same time, it is also possible for verb reduplication to contain non-cumulative and quantized subevents. Semelfactive verbs, such as jump and knock in English, are always punctual, i.e. they cannot be durative. This is shown by the observation that John jumps for an hour would only give the iterative reading that there is more than one jump in that hour, rather than the reading that one single jump lasts for an hour. The verb tiu3 'jump' in Cantonese works the same way as its English counterpart. Only (20), but not (21), is acceptable 6.
(20) ngo5 tiu3 zo2 saam1-fen1-zong1
     1sg jump Perf 3-minute
     'I jumped for 3 minutes.' (iterative only)
(21) *ngo5 hai2 saam1-fen1-zong1 zi1noi6 tiu3 (zo2)
     1sg in 3-minute within jump Perf
     '*I jumped (with)in 3 minutes.'
When the verb tiu3 'jump' is reduplicated, as in (22), the only reading allowed is that jumping is iterative, i.e. there must be more than one instance of repeated jumping. The fact that (22) cannot be durative can naturally be explained by the cumulativity and quantization contrast.
(23) jump(x) ∧ jump(y) = 1
(24) jump(y) → ¬y < x = 1
If (23) is true, then (24) is necessarily true, i.e. the atomic event y must not be a proper subpart of the atomic event x (cf. definition (8)). Since tiu3 'jump' is punctual and quantized, the sum of multiple instances of it must be a proper superset of each individual instance, therefore the reduplication is interpreted as an iterative event, but not a durative one.
This section has shown that the summation formulation naturally handles the two kinds of verb reduplication without stipulating summation itself. The choice between durative reading of one instance of the same event and the iterative reading that represents multiple instances can be predicted solely by the nature of the event denoted by the base verb. If the base form is cumulative, the summation function returns a durative event; if the base form is quantized, summation returns an iterative reading.
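This decision procedure can be stated as a small sketch in the same toy setting (illustrative only; the event-atoms and the particular predicate definitions are assumptions): an atelic base such as tai2 syu1 'read' is cumulative over sums of subevents, a semelfactive base such as tiu3 'jump' is not, and the reading of the reduplicated form follows directly.

from itertools import combinations

def is_cumulative(pred, domain):
    ps = [d for d in domain if pred(d)]
    return len(ps) >= 2 and all(pred(x | y) for x, y in combinations(ps, 2))

# Toy event domain: sums of instantaneous event-atoms.
instants = ['e1', 'e2', 'e3']
events = [frozenset(s) for r in range(1, 4) for s in combinations(instants, r)]

def reading(e):      # any stretch of reading is still 'reading' (atelic, cumulative)
    return len(e) >= 1

def single_jump(e):  # a single jump is punctual (semelfactive, quantized)
    return len(e) == 1

def reduplicated(pred, domain):
    # If the base is cumulative, the sum is one event of the same kind (durative);
    # otherwise the sum outgrows its atoms and only an iterative reading remains.
    return 'durative' if is_cumulative(pred, domain) else 'iterative'

print(reduplicated(reading, events))      # durative:  tai2 tai2 ha5 syu1 'reading for a while'
print(reduplicated(single_jump, events))  # iterative: tiu3 tiu3 ha5 'jumping repeatedly'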
Adjectives
There are two independent issues in the interpretation of adjectives. The first one concerns the status of reduplication as a semantic function and a syntactic head in the domain of adjectives. The second issue is the apparent contradiction between summation and the hedging and diminutive reading. This section will show that reduplication is indeed one of the variants that denotes degree, alongside hou2 'very' and other degree markers, such as gei2 'fairly'. It will also be shown that the diminutive reading does not contradict summation or plurality in general, echoing previous studies on diminutive reduplication (Abraham, 2005;Kouwenberg and LaCharité, 2005).
Regarding the first issue, the distribution of reduplication shows that the reduplication morpheme should be a functional head asserting some sort of degree. Comparing (25) and (26), since reduplication and degree markers like hou2 'very' cannot co-occur and one of them must appear in the utterance, they are in complementary distribution and must denote a similar function.
Section 3 showed that adjective predicates are internally complex, based on previous studies on scale and degrees. Following Grano (2011) and Grano and Kennedy (2012)'s analysis of Mandarin, elements like 'very' in Chinese denote a morpheme that turns a bare adjective into a degree-marked element 7 . More specifically, the assertion of the degree-marked adjective would involve a morpheme pos , which provides the contextual standard to determine whether the object in question meets the standard for the given property. Since reduplication also denotes the assertion that an entity meets a certain standard, one can say that reduplication shares the same position as pos , by the distribution shown above.
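A minimal sketch of pos in this spirit is given below (illustrative only; the numeric scale, the standard value and the lexical entries are assumptions that loosely follow the degree-semantic treatments cited above, with the contextual standard passed in explicitly).

def pos(measure, standard):
    # pos maps a bare gradable adjective (a measure function) to a property that is
    # true of x iff x's degree meets the contextual standard.
    return lambda x: measure(x) >= standard

fatness = {'this_cat': 0.9, 'that_cat': 0.4}       # hypothetical degrees on a 0-1 scale
is_fat = pos(lambda x: fatness[x], standard=0.7)   # assertion of a positive degree of 'fat'
print(is_fat('this_cat'), is_fat('that_cat'))      # True False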
The second issue is the diminutive interpretation as a counterexample to the present summation theory. Abraham (2005) investigates how reduplication can provide diminutive interpretation, assuming reduplication is an iconic manifestation of multiplicity. The data for diminutive reduplication cited in Abraham (2005) and Kouwenberg and LaCharité (2005) include verbs and adjectives, but the adjective examples are colour terms and other adjectives that can describe part of an entity, as in (28) and (29).
(28) a. Base form: red 'red' b. redi-redi 'reddish, red-spotted'
Caribbean Creoles (Abraham, 2005, p.552) (29) a. Base form: brok 'to break' b. brokii-brokii 'as if broken all over'
Caribbean Creoles (Kouwenberg and LaCharité, 2005, p.538)
The explanation given in Kouwenberg and LaCharité (2005) is that there is an intermediate meaning of 'red-spotted' or 'as if broken all over' which denotes multiple instances of redness (or for (29), breaks). The dispersive reading ('redspotted' or 'as if broken all over') is then extended to diminutive reading ('reddish' or 'fairly/ slightly broken'). In such a theory, both Kouwenberg and LaCharité (2005) and Abraham (2005) claim that reduplication in form does denote a sense of multiplicity, only that the multiplicity is distributed to the same entity. (Kouwenberg and LaCharité, 2005) claim that '(t)he real-world effect of such scattered distribution of colour is to tone down rather than intensify the colour'. Therefore the multiple spots of the colour would result in a diminutive reading, through the dispersive reading. However, the iconicity theory cannot explain the Cantonese examples like (26), where there cannot be a dispersive reading. Since the predicate fei4 'fat' applies to the whole entity 'cat', but not part of it, it is impossible to interpret fei4 fei4 dei2 as 'being fat everywhere / all over' in (26). The cumulative analysis pursued in this paper avoids the problem with dispersive reading. Based on the discussion of distribution above, we can see that bare adjectives (27) are not allowed in the language. If we further assume that adjectival predicates should include the positive morpheme pos for any assertion, the Cantonese data would mean that bare adjectives do not denote the positive degree, since they cannot assert the positive degree.
The cumulativity analysis, on the other hand, explains the correct diminutive interpretation and why no intensification arises. Given the formulation of cumulativity in (7), a predicate is considered cumulative if the sum of the predicate has the same denotation of its atomic elements. Let x and y be two property-denoting variables, each predicated by fat as in (30a):
(30) a. fat(x) ∧ fat(y) = 1
     b. ∀x,y [fat(x ⊕ y)] = 1
     c. ∀x,y [fat(x) ∧ fat(y) → ¬y < x] = 0
(30b) is true because any two instances of being fat conjoined would denote fat. For (30c) to be true, the property-denoting variable (i.e. bare adjectives without degree-marking) y must not be a proper subpart of x. However, this is not the case in the Cantonese data. For example, the belly of a fat cat can be described as fat. The proper subpart does share the same denotation of its whole. We thus conclude that adjectives in Cantonese must be cumulative. Section 5.2 has shown how cumulativity accounts for verb reduplication under durative interpretation. Adjectival reduplication shows a similar pattern. That is, the reduplicated form denotes a cumulative and non-quantized predicate. Cumulativity succeeds in preventing the wrong interpretation for reduplication to denote intensification in Cantonese. By extending the cumulativity analysis to adjectives, it can be seen that reduplication does not necessarily denote 'more' in the quantized sense, even though it denotes a summation function. The apparent contradiction between summation and diminution comes from the wrong comparison. Since atomic bare adjectives do not denote any degree, it would be wrong to compare the degree denoted by reduplication and the non-existing degree denoted by the bare form. Instead, the reduplicated form should be compared to the default degree-marker hou2 'very', as in (25), when one is measuring the intensity or extent of the assertion. Recall that Cantonese requires overt degree-marking, as shown by the unacceptability of (27). Comparing the two options (25) and (26) to assert a positive degree, (25) with hou2 would denote a neutral assertion of positive degree, but it can also be interpreted as emphasis or intensification, whereas (26) gives the diminutive, hedging reading ('slightly, fairly Adj'). Despite being a result of summation from the bare adjective, the degree denoted by reduplication should be compared to the canonical positive assertion, but not the atomic bare adjective. In other words, there is no contradiction between the summation formulation and the diminutive interpretation.
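The three steps in (30) can likewise be replayed in the toy part-structure used above (an illustrative assumption, not the paper's own formalization): any part of a fat cat that itself counts as fat makes the bare predicate cumulative and non-quantized, which is what blocks an intensifying reading of the reduplicated form.

from itertools import combinations

parts = ['belly', 'cheeks', 'flank']                       # parts of one fat cat (illustrative)
individuals = [frozenset(s) for r in range(1, 4) for s in combinations(parts, r)]

def fat(x):                      # every fat part, and every sum of fat parts, counts as 'fat'
    return len(x) >= 1

fat_things = [d for d in individuals if fat(d)]
cumulative = all(fat(x | y) for x, y in combinations(fat_things, 2))     # (30b) holds
quantized = all(not (y < x) for x in fat_things for y in fat_things)     # (30c) fails
print(cumulative, quantized)     # True False: bare 'fat' is cumulative, not quantized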
The present account is more powerful than the iconicity-based theory for two reasons. First, cumulativity is a property more widely observed across categories and languages, whereas iconicity is not as prominent in explaining behaviours of various constructions. The present account does not assume either iconicity or any form of symbolism and relies only on the notion of cumulativity, which is independently needed for count/mass distinction or durative events in the language. Second, the iconicity account does not explain the reduplication of adjectives that must describe the entity as a whole, but not a part, such as (26). On the contrary, cumulativity can handle such cases without relying on an intermediate dispersive reading, which is not always available.
The present cumulativity analysis makes the following two predictions: (i) in languages where reduplicated adjectives denote intensification, the adjectives are degree-marked and thus quantized;
(ii) in languages where reduplicated adjectives denote diminution, the adjectives are not marked with degree and thus cumulative. Cantonese adjectives would belong to type (ii). On the one hand, Cantonese adjective reduplication denotes diminution. On the other hand, Cantonese adjectives alone do not carry degree, as revealed by the observation that it requires degree marking.
This analysis does not exclude the possibility that the two options can co-exist in the same language, as we have already observed such cases in Cantonese verbs, where both cumulative and quantized predicates are possible within the same category. Our Cantonese adjectives are exclusively cumulative, but it does not mean that it is impossible for other languages to show categoryinternal variations in terms of cumulativity and quantization. What the present analysis predicts is that the two subtypes of adjectives would each display a different meaning in their respective reduplication forms, if they exist at all in such a language.
This section has explained that adjective reduplication in Cantonese should be treated as diminutive because the atomic bare adjective is cumulative. By showing that we should be comparing reduplication forms only to degree-marked adjectives, instead of the base form, we conclude that there is indeed no contradiction between the summation treatment to reduplication and its diminutive interpretation. By analogy, adjectives without degree-marking are similar to verbs without aspect-marking or nouns without classifiers or determiners, in the sense that bare verbs and bare nouns do not denote instantiated arguments, but only kinds of object or events in an abstract sense.
Implications
The present hypothesis that reduplication denotes summation is confirmed only with Cantonese data. However, it can also be tested by cross-linguistic data. Various pragmatic interpretations are discussed in the literature (Regier, 1994). Regier suggests notions like 'lack of specificity' and 'nonuniformity' as subtypes of meanings that can be denoted by reduplication. These can potentially be formalized as elements with fuzzy boundaries or multiple degrees along a scale. In languages where reduplication denotes intensification, the present analysis can also be extended to account for the increased degree through summation. This would then predict that the reduplicated elements are quantized, since the sum would have a distinct denotation.
The advantage of the present proposal is that the notions of cumulativity and quantization are independently testable without reduplication. For languages that show reduplication, knowing the cumulativity and quantization properties can predict the reading of reduplication. For unreduplicated base forms that are cumulative, such as paau2 bou6 'lit: run step, i.e. to jog' in (31), the present proposal predicts that its reduplicated form would denote the same predicate, i.e. a durative, atelic event. On the other hand, in a base form that is punctual, such as tiu3 sing2 'jump rope' in (32), each instance of jump must be quantized because the sum of two jumps cannot be described as a jump. In this case, it correctly predicts that the felicitous reading in sentence (32) must be iterative, but not a reading of a single prolonged jumpingaction. The present analysis shows that reduplication can be formalized as a summation process 8 , while the difference across categories in their respective interpretations can be resolved with the notions of cumulativity and quantization. This step allows us to apply semantic functions independently of syntactic categories. Since there can be variance of cumulativity and quantization within the same category, as observed in mass nouns and bare plural count nouns being cumulative and quantified count nouns being quantized, the cumulativity and quantization contrast in different cases of reduplication should not be solely attributed to a difference in category. This also raises the question of the traditional notion of 'category'. More precisely, if the semantic functions are shared across categories, then what is the role of categories in grammar? Independently, there are decompositional proposals in syntax that explicitly suggest parallel structure between the nominal and the verbal domains (Borer, 2005a;Borer, 2005b;Megerdoomian, 2008) and between the verbal and adjectival domains (Kennedy and McNally, 2005;Beavers, 2008;Ramchand, 2012). Wouldn't it be desirable to have a unified theory across lexical categories? Due to the limited scope of this study, we will leave the issue here for future research.
Conclusion
The main goal of this paper is to explain the cross-categorial behaviour of reduplication in Cantonese.
This paper has shown that it is possible to interpret reduplicated forms in lexical categories (i.e. nouns, verbs and adjectives) under the same function, summation. Whenever reduplication occurs, the atomic elements are added up and put into a collection. We argue that the difference in interpretations depends solely on the cumulativity and quantization of the element, but not its category. Nominal reduplication returns a superset of its elements, which conforms with the fact that classifier phrases in Cantonese denote individuated elements and is thus quantized. Verbal reduplication can be either cumulative or quantized, depending on the aktionsart of the individual verbal predicate. Adjectival reduplication is cumulative, due to its divisibility into subparts. The present analysis bears two implications. It captures the cross-categorial behaviour in semantic terms and provides a basis for future research on the formal semantic properties of reduplication across languages.
(22) ... tou5ngo6
     ... hungry
     'I (begin to) feel hungry while jumping.' (iterative reading only)
(4) go2 go2 sai3lo6 *(dou1) hou2 lek1
    CL CL child DISTR very smart
    'Every child is very smart.'
(5) ngo5 tai2 tai2 ha5 syu1 fan3 zo2
    1sg read read Dur book sleep Perf
    'I fell asleep while reading.'
(31) keoi5 paau2 paau2 ha5 bou6 gok3dak3 tou5ngo6
     3sg run run Asp step feel hungry
     'S/he feels hungry while jogging.'
(32) keoi5 tiu3 tiu3 ha5 sing2 gok3dak3 tou5ngo6
     3sg jump jump Asp rope feel hungry
     'S/he feels hungry while jumping rope.'
Abbreviations: CL-classifier, DISTR-distributive marker, Dur-durative aspect, Perf-perfect aspect, 3sg-third person singular pronoun, Prt-particle
Here the term predicate is used in a logical sense, not a linguistic sense. It bears no specification of category such as noun or verb, nor is it restricted to events or properties.
For detailed discussion of the syntax of the classifier and the noun in Cantonese, see Cheng (2012), which points out two puzzles about Cantonese classifier reduplication. While both Cantonese and Mandarin use classifiers in their nominals, only Cantonese allows classifier reduplication.
For independent reasons, presumably phonological, Cantonese reduplication often takes one syllable. The unacceptability of (12) shows that a partial reduplication (i.e. reduplicating only the first syllable) would not make the utterance acceptable.
Note that the two occurrences in (17) are not contiguous, thus it is distinct from verb reduplication.
Similar to English, one would judge (21) as acceptable if there was an implicit object that gives some other meaning. (21) intends only the literal meaning of 'jump'.
Cantonese is similar to Mandarin in all the related aspects here. Grano (2011) also notes that (27) can provide implicit comparative reading in a contrastive context, but this is outside the focus of this paper.
A natural next step is to extend the current analysis to the bisyllabic full reduplications, commonly known as the AABB and ABAB patterns. This is, however, beyond the scope of this study.
Acknowledgments I thank Ronnie Wilbur and Chuck Bradley for sharing their insights and critical comments at earlier stages of this project. I am grateful to the three anonymous reviewers for constructive suggestions.
Abraham, W. (2005). Intensity and diminution triggered by reduplicating morphology: Janus-faced iconicity, pages 547-568.
Beavers, J. (2008). Scalar complexity and the structure of events, pages 245-265. Berlin: Mouton de Gruyter.
Bennett, M. and Partee, B. (1972). Towards the logic of tense and aspect in English. Report for the System Development Corporation. Compositionality in Formal Semantics, pages 59-109.
Borer, H. (2005a). Structuring Sense: Volume I: In Name Only. Oxford University Press.
Borer, H. (2005b). Structuring Sense: Volume II: The Normal Course of Events, volume 2. Oxford University Press, USA.
Chakraborty, T. and Bandyopadhyay, S. (2009). Identification of reduplication in Bengali corpus and their semantic analysis: A rule-based approach. In 23rd International Conference on Computational Linguistics, page 73.
Cheng, L. L. S. (2012). Counting and classifiers. Oxford University Press.
De Rijk, R. P. and De Coene, A. (2008). Standard Basque: A progressive grammar. MIT Press.
Grano, T. and Kennedy, C. (2012). Mandarin transitive comparatives and the grammar of measurement. Journal of East Asian Linguistics, 21:219-266.
Heim, I. and Kratzer, A. (1998). Semantics in generative grammar, volume 13. Wiley-Blackwell.
Kennedy, C. and McNally, L. (2005). Scale structure, degree modification, and the semantics of gradable predicates. Language, pages 345-381.
Kouwenberg, S. and LaCharité, D. (2005). Less is more: Evidence from diminutive reduplication in Caribbean Creole languages, pages 533-545.
Krifka, M. (1998). The origins of telicity. Events and grammar, 197:235.
Krifka, M. (2001). The mereological approach to aspectual composition. Conference Perspectives on Aspect, University of Utrecht, OTS.
Liu, C.-S. L. (2012). Reduplication of adjectives in Chinese: a default state. Journal of East Asian Linguistics.
Megerdoomian, K. (2008). Parallel nominal and verbal projections. Current studies in linguistics series, 45:73.
Ramchand, G. (2012). Scalar structure across categories: V, P vs. A*. CASTL, University of Tromsø.
Regier, T. (1994). A preliminary study of the semantics of reduplication.
Rothstein, S. (2004). Structuring Events. Blackwell.
Rubino, C. (2011). Reduplication. In Dryer, M. S. and Haspelmath, M., editors, The World Atlas of Language Structures Online. Max Planck Digital Library, Munich.
Xiang, M. (2005). Some topics in comparative constructions. PhD thesis, Michigan State University.
Zhang, N. N. (2013). Classifier Structures in Mandarin Chinese. Berlin: Mouton de Gruyter. |
249,204,420 | DELA Project: Document-level Machine Translation Evaluation | This paper presents the results of the Document-level Machine Translation Evaluation (DELA) Project, a two-year project which started in September 2020 funded by the Irish Research Council. This paper describes the results of the project to date, as well as its latest developments. | [
245855945,
199501730,
229365773
] | DELA Project: Document-level Machine Translation Evaluation
Sheila Castilho sheila.castilho@adaptcentre.ie
ADAPT Centre School of Computing
Dublin City University
DELA Project: Document-level Machine Translation Evaluation
This paper presents the results of the Document-level Machine Translation Evaluation (DELA) Project, a two-year project which started in September 2020 funded by the Irish Research Council. This paper describes the results of the project to date, as well as its latest developments.
Introduction
The challenge of evaluating translations in context has been raising interest in the machine translation (MT) field. However, the definition of what constitutes a document-level (doc-level) MT evaluation, in terms of how much of the text needs to be shown, is still unclear. Few works have taken into account doc-level human evaluation (Barrault et al., 2020), and one common practice is the usage of test suites with context-aware markers. However, test suites with document-level boundaries are still scarce (Rysová et al., 2019). The main objective of the DELA Project is to define best practices for doc-level MT evaluation, and to test the existing human and automatic sentence-level evaluation metrics at the doc-level. We present here the results from the project to date, as well as the upcoming research to be carried out.
Context Span for MT
In Castilho et al. (2020), we tested the context span, that is, the length of context necessary, for the translation of 300 sentences in three different domains (reviews, subtitles, and literature), and showed that over 33% of the sentences tested required more context than the sentence itself to be translated or evaluated, and from those, 23% required more than two previous sentences to be properly evaluated. Ambiguity, terminology, and gender agreement were the most common issues to hinder translation, and moreover, there were observable differences in issues and context span between domains.
Doc-Level Evaluation Methodology
In Castilho (2020; 2021), we tested the differences in inter-annotator agreement (IAA) between single-sentence and doc-level setups. First, translators evaluated the MT output in terms of fluency, adequacy, ranking and error annotation in: (i) one score per single isolated sentence, and (ii) one score per document. Then, the doc-level setup was modified, and translators evaluated (i) random single sentences, (ii) individual sentences with access to the full source and MT output, and (iii) full documents. Results showed that assessing individual sentences within the context of a document yields a higher IAA compared to the random single-sentence methodology, while when translators give one score per document, IAA is much lower. Assigning one score per sentence in context avoids misevaluation cases, extremely common in random sentence-based evaluation setups. The higher IAA in the random single-sentence setup is because raters tend to accept the translation when adequacy is ambiguous but the translation is correct, especially if it is fluent. Without context, the sentence 'I am satisfied' translated into Portuguese in the masculine 'Eu estou satisfeito' will get a perfect score even when the gender of the pronoun I is feminine ('satisfeitA').
DELA Corpus
Using the issues found in Castilho et al. (2020), we developed the DELA corpus, a doc-level corpus annotated with context-aware issues when translating from English into Portuguese, namely gender, number, ellipsis, reference, lexical ambiguity, and terminology. The corpus contains 60 full documents and was compiled with six different domains: subtitles, literary, news, reviews, medical, and legislation; and it can be used as a challenge test set, a training/testing corpus for MT and quality estimation, and for deep linguistic analysis of context issues. The corpus and annotation guides can be found at: https://github.com/SheilaCastilho/DELA-Project
Examining Context-Related Issues
Using the DELA Corpus, we examine the shortest context span necessary to solve the issues annotated in the corpus, categorising the types of contexts according to their position and reporting on (i) context position and (ii) context length. We find that the shortest context span might appear in different positions in the document, including preceding, following, global, and world knowledge. The average length depends on the issue types as well as the domain. The results show that the standard approach of relying on only two preceding sentences as context might not be enough depending on the domain and issue types.
Latest Developments
The DELA Project, running until September 2022, will focus now on the human and automatic evaluation metrics for MT, testing and developing new ways to use them for doc-level evaluation. Doc-level human and automatic evaluation metrics: The focus of the DELA Project is to answer the following research questions: i) Are the state-of-the-art (SOTA) human and automatic evaluation metrics able to capture the quality level of the doc-level systems realistically?; and ii) Can/should they be modified, or are new ones needed?
A series of experiments with the SOTA human metrics are being carried out, informed by the best methodologies found in previous results. With that, we will determine whether these metrics can be used in doc-level evaluations, or if new metrics should (and could) be developed. The doc-level human evaluation will inform automatic metrics to be used for document-level systems. Doc-level evaluation tool: The DELA project will gather specifications from translators to design a translation evaluation tool which will provide an environment to assess MT quality at doc-level with human and automatic evaluation metric scores specified as best suited for doc-level evaluation in the project. The tool will be made freely available.
Acknowledgements: This project is funded by the Irish Research Council (GOIPD/2020/69). ADAPT, the Science Foundation Ireland Research Centre for AI-Driven Digital Content Technology at Dublin City University, is funded by the Science Foundation Ireland through the SFI Research Centres Programme (Grant 13/RC/2106 P2).
Barrault, Loïc, Magdalena Biesialska, Ondřej Bojar, Marta R. Costa-jussà, Christian Federmann, Yvette Graham, Roman Grundkiewicz, Barry Haddow, et al. 2020. Findings of the 2020 conference on machine translation (WMT20). In Proceedings of the Fifth Conference on Machine Translation, pages 1-55, Online, November. Association for Computational Linguistics.
Castilho, Sheila, Maja Popović, and Andy Way. 2020. On Context Span Needed for MT Evaluation. In Proceedings of the Twelfth International Conference on Language Resources and Evaluation (LREC'20), pages 3735-3742, Marseille, France, May.
Castilho, Sheila, João Lucas Cavalheiro Camargo, Miguel Menezes, and Andy Way. 2021. DELA corpus - a document-level corpus annotated with context-related issues. In Proceedings of the Sixth Conference on Machine Translation, pages 571-582. Association for Computational Linguistics.
Castilho, Sheila. 2020. On the same page? Comparing IAA in sentence and document level human MT evaluation. In Proceedings of the Fifth Conference on Machine Translation, pages 1150-1159. Association for Computational Linguistics, November.
Castilho, Sheila. 2021. Towards document-level human MT evaluation: On the issues of annotator agreement, effort and misevaluation. In Proceedings of the Workshop on Human Evaluation of NLP Systems, pages 34-45. Association for Computational Linguistics, April.
Rysová, Kateřina, Magdaléna Rysová, Tomáš Musil, Lucie Poláková, and Ondřej Bojar. 2019. A test suite and manual evaluation of document-level NMT at WMT19. In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 455-463, Florence, Italy, August. Association for Computational Linguistics. |
231,709,861 | Benchmarking Machine Reading Comprehension: A Psychological Perspective | Machine reading comprehension (MRC) has received considerable attention as a benchmark for natural language understanding. However, the conventional task design of MRC lacks explainability beyond the model interpretation, i.e., reading comprehension by a model cannot be explained in human terms. To this end, this position paper provides a theoretical basis for the design of MRC datasets based on psychology as well as psychometrics, and summarizes it in terms of the prerequisites for benchmarking MRC. We conclude that future datasets should (i) evaluate the capability of the model for constructing a coherent and grounded representation to understand contextdependent situations and (ii) ensure substantive validity by shortcut-proof questions and explanation as a part of the task design. | [
209485573,
3178759,
53296520,
4311819,
9192723,
52055325,
212644640,
5071138,
128296356,
174803111,
52158121,
3871146,
52019251,
1002552,
202542404,
11816014,
201698258,
173188058,
2381275,
59553499,
2100831,
1167588,
52113519,
47018994,
52156147,
5761781,
30866421,
52165754
] | Benchmarking Machine Reading Comprehension: A Psychological Perspective
April 19 -23, 2021
Saku Sugawara
National Institute of Informatics
Pontus Stenetorp p.stenetorp@cs.ucl.ac.uk
University College London
Akiko Aizawa aizawa@nii.ac.jp
National Institute of Informatics
Benchmarking Machine Reading Comprehension: A Psychological Perspective
Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics
the 16th Conference of the European Chapter of the Association for Computational LinguisticsApril 19 -23, 20211592
Machine reading comprehension (MRC) has received considerable attention as a benchmark for natural language understanding. However, the conventional task design of MRC lacks explainability beyond the model interpretation, i.e., reading comprehension by a model cannot be explained in human terms. To this end, this position paper provides a theoretical basis for the design of MRC datasets based on psychology as well as psychometrics, and summarizes it in terms of the prerequisites for benchmarking MRC. We conclude that future datasets should (i) evaluate the capability of the model for constructing a coherent and grounded representation to understand contextdependent situations and (ii) ensure substantive validity by shortcut-proof questions and explanation as a part of the task design.
Introduction
Evaluation of natural language understanding (NLU) is a long-standing goal in the field of artificial intelligence. Machine reading comprehension (MRC) is a task that tests the ability of a machine to read and understand unstructured text and could be the most suitable task for evaluating NLU because of its generic formulation (Chen, 2018). Recently, many large-scale datasets have been proposed, and deep learning systems have achieved human-level performance for some of these datasets.
However, analytical studies have shown that MRC models do not necessarily achieve humanlevel understanding. For example, Jia and Liang (2017) use manually crafted adversarial examples to show that successful systems are easily distracted. Sugawara et al. (2020) show that a significant part of already solved questions is solvable even after shuffling the words in a sentence or dropping content words. These studies demonstrate that we cannot explain what type of understanding is required by the datasets and is actually acquired by models. Although benchmarking MRC is related to the intent behind questions and is critical to test hypotheses from a top-down viewpoint (Bender and Koller, 2020), its theoretical foundation is poorly investigated in the literature.
In this position paper, we examine the prerequisites for benchmarking MRC based on the following two questions: (i) What does reading comprehension involve? (ii) How can we evaluate it? Our motivation is to provide a theoretical basis for the creation of MRC datasets. As Gilpin et al. (2018) indicate, interpreting the internals of a system is closely related to only the system's architecture and is insufficient for explaining how the task is accomplished. This is because even if the internals of models can be interpreted, we cannot explain what is measured by the datasets. Therefore, our study focuses on the explainability of the task rather than the interpretability of models.
We first overview MRC and review the analytical literature that indicates that existing datasets might fail to correctly evaluate their intended behavior (Section 2). Subsequently, we present a psychological study of human reading comprehension in Section 3 for answering the what question. We argue that the concept of representation levels can serve as a conceptual hierarchy for organizing the technologies in MRC. Section 4 focuses on answering the how question. Here, we implement psychometrics to analyze the prerequisites for the task design of MRC. Furthermore, we introduce the concept of construct validity, which emphasizes validating the interpretation of the task's outcome. Finally, in Section 5, we explain the application of the proposed concepts into practical approaches, highlighting potential future directions toward the advancement of MRC. Regarding the what question, we indicate that datasets should evaluate the capability of the situation model, which refers to the construction Representation levels in human reading comprehension: (A) surface structure, (B) textbase, and (C) situation model. (A) Linguistic-level sentence understanding, (B) comprehensiveness of skills for inter-sentence understanding, and (C) evaluation of coherent representation grounded to non-textual information.
(C) Dependence of context on defeasibility and novelty, and grounding to non-textual information with a long passage.
How can we evaluate reading comprehension?
Construct validity in psychometrics: (1) content, (2) substantive, (3) structural, (4) generalizability, (5) external, and (6) consequential aspects.
(1) Wide coverage of skills, (2) evaluation of the internal process, (3) structured metrics, (4) reliability of metrics, (5) comparison with external variables, and (6) robustness to adversarial attacks and social biases.
(2) Creating shortcutproof questions by filtering and ablation, and designing a task for validating the internal process. of a coherent and grounded representation of text based on human understanding. Regarding the how question, we argue that among the important aspects of the construct validity, substantive validity must be ensured, which requires the verification of the internal mechanism of comprehension. Table 1 provides an overview of the perspectives taken in this paper. Our answers and suggestions to the what and how questions are summarized as follows: (1) Reading comprehension is the process of creating a situation model that best explains given texts and the reader's background knowledge. The situation model should be the next focal point in future datasets for benchmarking the human-level reading comprehension. (2) To evaluate reading comprehension correctly, the task needs to provide a rubric (scoring guide) for sufficiently covering the aspects of the construct validity. In particular, the substantive validity should be ensured by creating shortcut-proof questions and by designing a task formulation that is explanatory itself.
2 Task Overview
Task Variations and Existing Datasets
MRC is a task in which a machine is given a document (context) and it answers the questions based on the context. Burges (2013) provides a general definition of MRC, i.e., a machine comprehends a passage of text if, for any question regarding that text that can be answered correctly by a majority of native speakers, that machine can provide a string which those speakers would agree both answers that question. We overview various aspects of the task along with representative datasets as follows. Existing datasets are listed in Appendix A.
Context Styles A context can be given in various forms with different lengths such as a single pas-sage (MCTest (Richardson et al., 2013)), a set of passages (HotpotQA (Yang et al., 2018)), a longer document (CBT (Hill et al., 2016)), or open domain (Chen et al., 2017). In some datasets, a context includes non-textual information such as images (RecipeQA (Yagcioglu et al., 2018)).
Question Styles A question can be an interrogative sentence (in most datasets), a fill-in-theblank sentence (cloze) (CLOTH (Xie et al., 2018)), knowledge base entries (QAngaroo (Welbl et al., 2018)) and search engine queries (MSMARCO (Nguyen et al., 2016)).
Answering Styles An answer can be (i) chosen from a text span of the given document (answer extraction) (NewsQA (Trischler et al., 2017)), (ii) chosen from a candidate set of answers (multiple choice) (MCTest (Richardson et al., 2013)), or (iii) generated as a free-form text (description) (Narra-tiveQA (Kočiský et al., 2018)). Some datasets optionally allow answering by a yes/no reply (BoolQ (Clark et al., 2019)).
Sourcing Methods Initially, questions in smallscale datasets are created by experts (QA4MRE (Sutcliffe et al., 2013)). Later, fueling the development of neural models, most published datasets have more than a hundred thousand questions that are automatically created (CNN/Daily Mail (Hermann et al., 2015)), crowdsourced (SQuAD v1.1 (Rajpurkar et al., 2016)), and collected from examinations (RACE (Lai et al., 2017)).
Domains The most popular domain is Wikipedia articles (Natural Questions (Kwiatkowski et al., 2019)), but news articles are also used (Who-did-What (Onishi et al., 2016)). CliCR (Suster and Daelemans, 2018) and emrQA (Pampari et al., 2018) are datasets in the clinical domain. DuoRC (Saha et al., 2018) uses movie scripts.
Specific Skills Several recently proposed datasets require specific skills including unanswerable questions (SQuAD v2.0 (Rajpurkar et al., 2018)), dialogues (CoQA (Reddy et al., 2019), DREAM (Sun et al., 2019)), multiple-sentence reasoning (MultiRC (Khashabi et al., 2018)), multi-hop reasoning (HotpotQA (Yang et al., 2018)), mathematical and set reasoning (DROP (Dua et al., 2019)), commonsense reasoning (CosmosQA (Huang et al., 2019)), coreference resolution (QuoRef (Dasigi et al., 2019)), and logical reasoning (ReClor (Yu et al., 2020)).
Benchmarking Issues
In some datasets, the performance of machines has already reached human-level performance. However, Jia and Liang (2017) indicate that models can easily be fooled by manual injection of distracting sentences. Their study revealed that questions simply gathered by crowdsourcing without careful guidelines or constraints are insufficient to evaluate precise language understanding. This argument is supported by further studies across a variety of datasets. For example, Min et al. (2018) find that more than 90% of the questions in SQuAD (Rajpurkar et al., 2016) require obtaining an answer from a single sentence despite being provided with a passage. Sugawara et al. (2018) show that large parts of twelve datasets are easily solved only by looking at a few first question tokens and attending the similarity between the given questions and the context. Similarly, Feng et al. (2018) and Mudrakarta et al. (2018) demonstrate that models trained on SQuAD do not change their predictions even when the question tokens are partly dropped. Kaushik and Lipton (2018) also observe that question-and passage-only models perform well for some popular datasets. Min et al. (2019) and Chen and Durrett (2019) concurrently indicate that for multi-hop reasoning datasets, the questions are solvable only with a single paragraph and thus do not require multi-hop reasoning over multiple paragraphs. Zellers et al. (2019b) report that their dataset unintentionally contains stylistic biases in the answer options which are embedded by a language-based model.
Overall, these investigations highlight a grave issue of the task design, i.e., even if the models achieve human-level accuracies, we cannot prove that they successfully perform reading comprehen-sion. This issue may be attributed to the low interpretability of black-box neural network models. However, a problem is that we cannot explain what is measured by the datasets even if we can interpret the internals of models. We speculate that this benchmarking issue in MRC can be attributed to the following two points: (i) we do not have a comprehensive theoretical basis of reading comprehension for specifying what we should ask (Section 3) and (ii) we do not have a well-established methodology for creating a dataset and for analyzing a model based on it (Section 4). 1 In the remainder of this paper, we argue that these issues can be addressed by using insights from the psychological study of reading comprehension and by implementing psychometric means of validation.
Reading Comprehension from
Psychology to MRC
Computational Model in Psychology
Human text comprehension has been studied in psychology for a long time (Kintsch and Rawson, 2005;Graesser et al., 1994;Kintsch, 1988). Connectionist and computational architectures have been proposed for such comprehension including a mechanism pertinent to knowledge activation and memory storing. Among the computational models, the construction-integration (CI) model is the most influential and provides a strong foundation of the field (McNamara and Magliano, 2009). The CI model assumes three different representation levels as follows:
• Surface structure is the linguistic information of particular words, phrases, and syntax obtained by decoding the raw textual input.
• Textbase is a set of propositions in the text, where the propositions are locally connected by inferences (microstructure).
• Situation model is a situational and coherent mental representation in which the propositions are globally connected (macrostructure), and it is often grounded to not only texts but also to sounds, images, and background information.
The CI model first decodes textual information (i.e., the surface structure) from the raw textual 1 These two issues loosely correspond to the plausibility and faithfulness of explanation (Jacovi and Goldberg, 2020). The plausibility is linked to what we expect as an explanation, whereas the faithfulness refers to how accurately we explain models' reasoning process. input, then creates the propositions (i.e., textbase) and their local connections occasionally using the reader's knowledge (construction), and finally constructs a coherent representation (i.e., situation model) that is organized according to five dimensions including time, space, causation, intentionality, and objects (Zwaan and Radvansky, 1998), which provides a global description of the events (integration). These steps are not exclusive, i.e., propositions are iteratively updated in accordance with the surrounding ones with which they are linked. Although the definition of successful text comprehension can vary, Hernández-Orallo (2017) indicates that comprehension implies the process of creating (or searching for) a situation model that best explains the given text and the reader's background knowledge (Zwaan and Radvansky, 1998). We use this definition to highlight that the creation of a situation model plays a vital role in human reading comprehension.
Our aim in this section is to provide a basis for explaining what reading comprehension is, which requires terms for explanation. In the computational model above, the representation levels appear to be useful for organizing such terms. We ground existing NLP technologies and tasks to different representation levels in the next section.
Skill Hierarchy for MRC
Here, we associate the existing NLP tasks with the three representation levels introduced above. The biggest advantage of MRC is its general formulation, which makes it one of the most general tasks for evaluating NLU. This places particular importance on the variety of skills required in MRC, which can serve as the units for explaining reading comprehension. Therefore, our motivation is to provide an overview of these skills as a hierarchical taxonomy and to highlight the missing aspects in existing MRC datasets that are required for comprehensively covering the representation levels.
Existing Taxonomies We first provide a brief overview of the existing taxonomies of skills in NLU tasks. For recognizing textual entailment (Dagan et al., 2006), several studies present a classification of reasoning and commonsense knowledge (Bentivogli et al., 2010; Sammons et al., 2010; LoBue and Yates, 2011). For scientific question answering, Jansen et al. (2016) categorize knowledge and inference for an elementary-level dataset. Similarly, Boratko et al. (2018) propose types of knowledge and reasoning for scientific questions in MRC (Clark et al., 2018). A limitation of both these studies is that the proposed sets of knowledge and inference are limited to the domain of elementary-level science. Although some existing datasets for MRC have their own classifications of skills, they are coarse and only cover a limited extent of typical NLP tasks (e.g., word matching and paraphrasing). In contrast, for a more generalizable definition, Sugawara et al. (2017) propose a set of 13 skills for MRC. Rogers et al. (2020) pursue this direction by proposing a set of questions with eight question types. In addition, Schlegel et al. (2020) propose an annotation schema to investigate requisite knowledge and reasoning. Dunietz et al. (2020) propose a template of understanding that consists of spatial, temporal, causal, and motivational questions to evaluate precise understanding of narratives with reference to human text comprehension.
In what follows, we describe the three representation levels that basically follow the three representations of the CI model but are modified for MRC. The three levels are shown in Figure 1. We emphasize that we do not intend to create exhaustive and rigid definitions of skills. Rather, we aim to place them in a hierarchical organization, which can serve as a foundation to highlight the missing aspects in the current MRC.
Surface Structure This level broadly covers the linguistic information and its semantic meaning, which can be obtained from the raw textual input. Although these features form a proposition according to psychology, this level should be viewed as sentence-level semantic representation in computational linguistics. It includes part-of-speech tagging, syntactic parsing, dependency parsing, punctuation recognition, named entity recognition (NER), and semantic role labeling (SRL). Although these basic tasks can be accomplished by some recent pretraining-based neural language models (Liu et al., 2019), they are hardly required in NLU tasks including MRC. In the natural language inference task, McCoy et al. (2019) indicate that existing datasets (e.g., Bowman et al. (2015)) may fail to elucidate the syntactic understanding of given sentences. Although it is not obvious that these basic tasks should be included in MRC, and it is not easy to circumscribe linguistic knowledge from concrete and abstract knowledge (Zaenen et al., 2005; Manning, 2006), we should always care about the capabilities on basic tasks (e.g., using checklists (Ribeiro et al., 2020)) when the performance of a model is being assessed.
Textbase This level covers local relations of propositions in the computational model of reading comprehension. In the context of NLP, it refers to various types of relations linked between sentences. These relations include not only the typical relations between sentences (discourse relations) but also the links between entities. Consequently, this level includes coreference resolution, causality, temporal relations, spatial relations, text structuring relations, logical reasoning, knowledge reasoning, commonsense reasoning, and mathematical reasoning. We also include multi-hop reasoning (Welbl et al., 2018) at this level because it does not necessarily require a coherent global representation over a given context. For studying the generalizability of MRC, Fisch et al. (2019) propose a shared task featuring training and testing on multiple domains. Further, Khashabi et al. (2020) find that training on multiple datasets leads to robust generalization. However, unless we make sure that datasets require various skills with sufficient coverage, it might remain unclear whether we evaluate a model's transferability of the reading comprehension ability.
Situation Model This level targets the global structure of propositions in human reading comprehension. It includes a coherent and situational representation of a given context and its grounding to non-textual information. A coherent representation has well-organized sentence-to-sentence transitions (Barzilay and Lapata, 2008), which are vital for using procedural and script knowledge (Schank and Abelson, 1977). This level also includes characters' goals and plans, a meta perspective including the author's intent and attitude, thematic understanding, and grounding to other media. Most existing MRC datasets seem to struggle to target the situation model; we discuss this further in Section 5.1.
Example The representation levels in the example shown in Figure 2 are described as follows. Q1 is at the surface-structure level, where a reader only needs to understand the subject of the first event. We expect that Q2 requires understanding of relations among the described entities and events at the textbase level; the reader may need to understand who she means using coreference resolution.
The word escaping in Q2 also requires the reader's commonsense to associate it with the first event. However, the reader might be able to answer this question only by looking for a place (specified by where) described in the passage; we therefore need to validate that the question correctly evaluates understanding of the described events. Q3 is an example that requires imagining a different situation at the situation-model level, which could be further associated with a grounding question such as which figure best depicts the given passage?
In summary, we indicate that the following features might be missing in existing datasets:
• Considering the capability to acquire basic understanding of the linguistic-level information.
• Ensuring that the questions comprehensively specify and evaluate textbase-level skills.
• Evaluating the capability of the situation model in which propositions are coherently organized and are grounded to non-textual information.
Should MRC models mimic human text comprehension? In this paper, we do not argue that MRC models should mimic human text comprehension. However, when we design an NLU task and create datasets for testing human-like linguistic generalization, we can refer to the aforementioned features to frame the intended behavior to evaluate in the task. As Linzen (2020) discusses, the task design is orthogonal to how the intended behavior is realized at the implementation level (Marr, 1982).
MRC on Psychometrics
In this section, we provide a theoretical foundation for the evaluation of MRC models. When MRC measures the capability of reading comprehension, validation of the measurement is crucial for obtaining a reliable and useful explanation. Therefore, we focus on psychometrics, a field of study concerned with assessing the quality of psychological measurement (Furr, 2018). We expect that insights obtained from psychometrics can facilitate a better task design. In Section 4.1, we first review the concept of validity in psychometrics. Subsequently, in Section 4.2, we examine the aspects that correspond to construct validity in MRC and then indicate the prerequisites for verifying the intended explanation of MRC in its task design.
Construct Validity in Psychometrics
According to psychometrics, construct validity is necessary for validating the interpretation of outcomes of psychological experiments.2 Messick (1995) reports that construct validity consists of the six aspects shown in Table 2.
In the design of educational and psychological measurement, these aspects collectively provide verification questions that need to be answered to justify the interpretation and use of test scores. In this sense, construct validation can be viewed as an empirical evaluation of the meaning and consequences of measurement. Given that MRC is intended to capture the reading comprehension ability, task designers need to be aware of these validity aspects. Otherwise, users of the task cannot justify the score interpretation, i.e., it cannot be confirmed that successful systems actually perform the intended reading comprehension. Table 2 also lists MRC features corresponding to the six aspects of construct validity. In what follows, we elaborate on these correspondences and discuss the missing aspects that are needed to achieve construct validity in current MRC.
Construct Validity in MRC
Content Aspect As discussed in Section 3, sufficiently covering the skills across all the representation levels is an important requirement of MRC. It may be desirable that an MRC model is simultaneously evaluated on various skill-oriented examples.
Substantive Aspect This aspect appraises the evidence for the consistency of model behavior. We consider that this is the most important aspect for explaining reading comprehension, a process that subsumes various implicit and complex steps. To obtain a consistent response from an MRC system, it is necessary to ensure that the questions correctly assess the internal steps in the process of reading comprehension. However, as stated in Section 2.2, most existing datasets fail to verify that a question is solved by using an intended skill, which implies that it cannot be proved that a successful system can actually perform intended comprehension.
Structural Aspect Another issue in current MRC is that most datasets provide only simple accuracy as a metric. Given that the substantive aspect necessitates the evaluation of the internal process of reading comprehension, the structure of the metrics needs to reflect it. Nonetheless, a few studies have attempted to provide a dataset with multiple metrics. For example, Yang et al. (2018) not only ask for the answers to questions but also provide sentence-level supporting facts. This metric can also evaluate the process of multi-hop reasoning whenever the supporting sentences need to be understood to answer a question. Therefore, we need to consider the substantive and structural aspects together.
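As a rough illustration of such a structured metric, the following sketch combines answer exact match with a supporting-fact F1 and a joint score; it is inspired by, but not identical to, the official supporting-fact evaluation of Yang et al. (2018), and the function names and input format are assumptions.

```python
def f1(pred, gold):
    """Set-level F1 between predicted and gold supporting facts."""
    pred, gold = set(pred), set(gold)
    if not pred or not gold:
        return float(pred == gold)
    tp = len(pred & gold)
    if tp == 0:
        return 0.0
    precision, recall = tp / len(pred), tp / len(gold)
    return 2 * precision * recall / (precision + recall)

def structured_scores(pred_answer, gold_answer, pred_facts, gold_facts):
    """Answer exact match, supporting-fact F1, and a joint score that is high
    only when both the answer and its supporting sentences are correct."""
    answer_em = float(pred_answer.strip().lower() == gold_answer.strip().lower())
    sp_f1 = f1(pred_facts, gold_facts)
    return {"answer_em": answer_em, "sp_f1": sp_f1, "joint": answer_em * sp_f1}

print(structured_scores("forest", "Forest",
                        [("doc1", 2)], [("doc1", 2), ("doc1", 3)]))
```

Reporting all three numbers makes visible, as a gap between the answer and joint scores, any accuracy obtained without the intended intermediate reasoning.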
Generalizability Aspect The generalizability of MRC can be understood in terms of the reliability of metrics and the reproducibility of findings. For the reliability of metrics, we need to take care of the reliability of both gold answers and model predictions. Regarding the gold answers, the performance of a model becomes unreliable when the answers are unintentionally ambiguous or impractical. Because the gold answers in most datasets are decided only by the majority vote of crowdworkers, the ambiguity of the answers is not considered. It may be useful if such ambiguity can be reflected in the evaluation (e.g., using item response theory (Lalor et al., 2016)). As for model predictions, an issue may be the reproducibility of results (Bouthillier et al., 2019), which implies that the reimplementation of a system generates statistically similar predictions. For the reproducibility of models, Dror et al. (2018) emphasize statistical testing methods to evaluate models. For the reproducibility of findings, Bouthillier et al. (2019) stress the transferability of findings in one dataset/task to another dataset/task. In open-domain question answering, Lewis et al. (2021) point out that successful models might only memorize dataset-specific knowledge. To facilitate this transferability, we need units of explanation that can be used across different datasets (Doshi-Velez and Kim, 2018).
External Aspect This aspect refers to the relationship between a model's scores on different tasks. Yogatama et al. (2019) point out that current models struggle to transfer their ability from the task they were originally trained on (e.g., MRC) to different unseen tasks (e.g., SRL). To develop a general NLU model, one would expect that a successful MRC model should show sufficient performance on other NLU tasks as well. To this end, Wang et al. (2019) propose an evaluation framework with ten different NLU tasks in the same format.
Consequential Aspect This aspect refers to the actual and potential consequences of test use. In MRC, this concerns the use of a successful model in practical situations beyond the task itself, where we need to ensure the robustness of a model to adversarial attacks and the accountability for unintended model behaviors. Wallace et al. (2019) highlight this aspect by showing that existing NLP models are vulnerable to adversarial examples and can thereby generate egregious outputs.
Summary: Design of Rubric Given the validity aspects, our suggestion is to design a rubric (a scoring guide used in education) of what reading comprehension we expect a dataset to evaluate; this helps to inspect detailed strengths and weaknesses of models that cannot be obtained from simple accuracy alone. The rubric should not only cover various linguistic phenomena (the content aspect) but also involve different levels of intermediate evaluation in the reading comprehension process (the substantive and structural aspects) as well as stress testing with adversarial attacks (the consequential aspect). The rubric has a motivation similar to that of dataset statements (Bender and Friedman, 2018; Gebru et al., 2018); however, taking the validity aspects into account would improve its substance.
Future Directions
This section discusses future potential directions toward answering the what and how questions in Sections 3 and 4. In particular, we infer that the situation model and substantive validity are critical for benchmarking human-level MRC.
What Question: Situation Model
As mentioned in Section 3, existing datasets fail to fully assess the ability to create the situation model. As a future direction, we suggest that the task should deal with two features of the situation model: context dependency and grounding.
Context-dependent Situations
A vital feature of the situation model is that it is conditioned on a given text, i.e., a representation is constructed distinctively depending on the given context. We elaborate on this by discussing two key features: defeasibility and novelty.
Defeasibility The defeasibility of a constructed representation implies that a reader can modify and revise it according to newly acquired information (Davis and Marcus, 2015; Schubert, 2015). The defeasibility of NLU has been tackled in the tasks of if-then reasoning (Sap et al., 2019a), abductive reasoning (Bhagavatula et al., 2020), counterfactual reasoning (Qin et al., 2019), and contrast sets (Gardner et al., 2020). A possible approach in MRC is to ask questions against a set of modified passages that describe slightly different situations, where the same question can lead to different conclusions.
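A minimal sketch of this idea, loosely in the spirit of contrast sets, is shown below; the model_predict callable and the data format are assumptions, and the toy rule-based model only illustrates how a consistency score over passage variants could be computed.

```python
def consistency(model_predict, clusters):
    """Credit a cluster only if every passage variant is answered correctly."""
    solved = 0
    for cluster in clusters:
        if all(model_predict(passage, cluster["question"]) == gold
               for passage, gold in cluster["variants"]):
            solved += 1
    return solved / len(clusters)

# Toy illustration with a trivial rule-based stand-in for an MRC model.
def toy_predict(passage, question):
    return "caught" if "was awake" in passage else "escaped"

clusters = [{
    "question": "What happened to the princess?",
    "variants": [
        ("Her mother was sleeping, so she climbed down the wall.", "escaped"),
        ("Her mother was awake and saw her at the window.", "caught"),
    ],
}]
print(consistency(toy_predict, clusters))  # 1.0 for this toy model
```

A consistency score of this kind rewards models only when the same question is answered correctly under each slightly modified situation, which is exactly what defeasible comprehension requires.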
Novelty An example showing the importance of contextual novelty is Could a crocodile run a steeplechase? by Levesque (2014). This question poses a novel situation in which the solver needs to combine multiple pieces of commonsense knowledge to derive the correct answer. If only non-fiction documents, such as newspaper and Wikipedia articles, are used, some questions require only reasoning over facts already available in web-based corpora. Fictional narratives may be a better source for creating questions about novel situations.
Grounding to Other Media
In MRC, grounding texts to non-textual information is not yet fully explored. Kembhavi et al. (2017) propose a dataset based on science textbooks, which contains questions with passages, diagrams, and images. Kahou et al. (2018) propose a figure-based question answering dataset that requires the understanding of figures such as line plots and bar charts. Although another approach could be vision-based question answering tasks (Antol et al., 2015; Zellers et al., 2019a), we cannot directly use them for evaluating NLU because they focus on the understanding of images rather than texts. Similarly to the textbook questions (Kembhavi et al., 2017), a possible approach would be to create questions about the understanding of texts through showing figures. We might also need to account for the scope of grounding (Bisk et al., 2020), i.e., ultimately understanding human language in a social context beyond simply associating texts with perceptual information.
How Question: Substantive Validity
Substantive validity requires us to ensure that the questions correctly assess the internal steps of reading comprehension. We discuss two approaches for this challenge: creating shortcut-proof questions and ensuring the explanation by design.
Shortcut-proof Questions
Removing Unintended Biases by Filtering Gururangan et al. (2018) show that annotation artifacts in crowdsourced data allow partial-input models to perform unexpectedly well, which motivates filtering out such easily solved examples. Zellers et al. (2018) propose a model-based adversarial filtering method that iteratively trains an ensemble of stylistic classifiers and uses them to filter out the questions. Sakaguchi et al. (2020) also propose filtering methods based on both machines and humans to alleviate dataset-specific and word-association biases. However, a major issue is the inability to discern knowledge from bias in a closed domain. When the domain is a single dataset, patterns that are valid only in that domain are called dataset-specific biases (or annotation artifacts in the labeled data). When the domain covers larger corpora, such patterns (e.g., frequency) are called word-association biases. When the domain includes everyday experience, the patterns are called commonsense. However, as mentioned in Section 5.1, commonsense knowledge can be defeasible, which implies that the knowledge can be false in unusual situations. In contrast, when the domain is our real world, indefeasible patterns are called factual knowledge. Therefore, the distinction between bias and knowledge depends on where the pattern is recognized. This means that a dataset should be created so that it can evaluate reasoning on the intended knowledge. For example, to test defeasible reasoning, we must filter out questions that are solvable by usual commonsense alone. If we want to investigate the reading comprehension ability without depending on factual knowledge, we can consider counterfactual or fictional situations.
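The following sketch illustrates partial-input (answer-only) filtering under several assumptions: it is not the actual adversarial filtering or AFLite algorithm, it requires scikit-learn, and the example format (options, label) is hypothetical.

```python
# A rough sketch of answer-only filtering; NOT the actual AF/AFLite algorithm.
# Assumes scikit-learn and a list of dicts with "options" (list of strings)
# and "label" (index of the correct option).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

def filter_answer_only_solvable(examples):
    """Keep only examples that an answer-only classifier fails to solve."""
    texts = [" [SEP] ".join(ex["options"]) for ex in examples]  # no question/context
    labels = [ex["label"] for ex in examples]
    features = TfidfVectorizer().fit_transform(texts)
    # Held-out predictions: items solved from the options alone likely
    # contain stylistic cues rather than requiring reading comprehension.
    preds = cross_val_predict(LogisticRegression(max_iter=1000),
                              features, labels, cv=5)
    return [ex for ex, p, y in zip(examples, preds, labels) if p != y]
```

Published methods repeat this kind of filtering with stronger learned representations and over multiple rounds; the sketch only shows the core idea of discarding items that a model can solve without reading.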
Identifying Requisite Skills by Ablating Input Features Another approach is to verify shortcut-proof questions by analyzing the human answerability of questions with respect to their key features. We speculate that if a question is still answerable by humans even after removing the intended features, the question does not require understanding of the ablated features (e.g., checking the necessity of resolving pronoun coreference after replacing pronouns with dummy nouns). Even if we cannot accurately identify all such necessary features, by identifying partial features in a sufficient number of questions, we can expect that the questions evaluate the corresponding intended skill. In a similar vein, Geirhos et al. (2020) argue that a dataset is useful only if it is a good proxy for the underlying ability one is actually interested in.
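As a small illustration of such an ablation, the sketch below replaces pronouns with a dummy mention so that annotators can re-check whether a question remains answerable; the pronoun list, the dummy token, and the overall setup are simplifying assumptions rather than an established protocol.

```python
import re

# The pronoun list and dummy token are illustrative assumptions only.
PRONOUNS = r"\b(he|she|they|him|her|them|his|hers|their)\b"

def ablate_pronouns(passage, dummy="somebody"):
    """Remove coreference cues by replacing pronouns with a dummy mention."""
    return re.sub(PRONOUNS, dummy, passage, flags=re.IGNORECASE)

passage = ("The princess climbed out the window of the high tower and climbed "
           "down the south wall when her mother was sleeping. "
           "She wandered out a good way.")
print(ablate_pronouns(passage))
# If annotators can still answer "Who wandered out?" after this ablation,
# the question probably does not test coreference resolution.
```

A real study would rely on a proper NLP pipeline and human judgments rather than a regular expression, but the ablate-and-recheck loop is the same.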
Explanation by Design
Another approach for ensuring the substantive validity is to include explicit explanation in the task formulation. Although gathering human explanations is costly, the following approaches can facilitate the explicit verification of a model's understanding using a few test examples.
Generating Introspective Explanation Inoue et al. (2020) classify two types of explanation in text comprehension: justification explanation and introspective explanation. The justification explanation only provides a collection of supporting facts for making a certain decision, whereas the introspective explanation provides the derivation of the answer for making the decision, which can cover linguistic phenomena and commonsense knowledge not explicitly mentioned in the text. They annotate multi-hop reasoning questions with introspective explanation and propose a task that requires the derivation of the correct answer of a given question to improve the explainability. Rajani et al. (2019) collect human explanations for commonsense reasoning and improve the system's performance by modeling the generation of the explanation. Although we must take into account the faithfulness of explanation, asking for introspective explanations could be useful in inspecting the internal reasoning process, e.g., by extending the task formulation so that it includes auxiliary questions that consider the intermediate facts in a reasoning process. For example, before answering Q2 in Figure 2, a reader should be able to answer who escaped? and where did she escape from? at the surface-structure level.
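To illustrate what such an extended task formulation might look like, the following example shows one possible data format, hypothetical and not the actual R4C schema, in which a question is paired with auxiliary sub-questions that expose intermediate facts of the derivation; all field names are assumptions.

```python
# All field names here are assumptions made for this sketch.
example = {
    "passage": "The princess climbed out the window ... she went into the forest.",
    "question": "Where did the princess wander after escaping?",
    "answer": "the forest",
    "derivation": [  # auxiliary questions a model should also answer correctly
        {"sub_question": "Who escaped?", "sub_answer": "the princess"},
        {"sub_question": "Where did she escape from?", "sub_answer": "the high tower"},
        {"sub_question": "Where did she go afterwards?", "sub_answer": "the forest"},
    ],
}
```

Scoring the sub-questions alongside the main question would give the kind of intermediate evidence of understanding that the substantive aspect calls for.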
Creating Dependency Between Questions Another approach for improving the substantive validity is to create dependency between questions by which answering them correctly involves answering some other questions correctly. For example, Dalvi et al. (2018) propose a dataset that requires a procedural understanding of scientific facts.
In their dataset, a set of questions corresponds to the steps of the entire process of a scientific phenomenon. Therefore, this set can be viewed as a single question that requires a complete understanding of the scientific phenomenon. In CoQA (Reddy et al., 2019), it is noted that questions often have pronouns that refer back to nouns appearing in previous questions. These mutually-dependent questions can probably facilitate the explicit validation of the models' understanding of given texts.
Conclusion
In this paper, we outlined current issues and future directions for benchmarking machine reading comprehension. We visited the psychology study to analyze what we should ask of reading comprehension and the construct validity in psychometrics to analyze how we should correctly evaluate it. We deduced that future datasets should evaluate the capability of the situation model for understanding context-dependent situations and for grounding to non-textual information and ensure the substantive validity by creating shortcut-proof questions and designing an explanatory task formulation.
Figure 1: Representation levels and corresponding natural language understanding skills. Situation model: construct the global structure of propositions (skills: creating a coherent representation and grounding it to other media). Textbase: construct the local relations of propositions (skills: recognizing relations between sentences, such as coreference resolution, knowledge reasoning, and understanding discourse relations). Surface structure: create propositions from the textual input (skills: syntactic and dependency parsing, POS tagging, SRL, and NER).
Figure 2: Example questions of the different representation levels. The passage is taken from MCTest.
Passage: The princess climbed out the window of the high tower and climbed down the south wall when her mother was sleeping. She wandered out a good way. Finally, she went into the forest where there are no electric poles.
Q1: Who climbed out of the castle? A: Princess
Q2: Where did the princess wander after escaping? A: Forest
Q3: What would happen if her mother was not sleeping? A: The princess would be caught soon (multiple choice)
Table 1: Overview of theoretical foundations, requirements, and future directions of MRC discussed in this paper.
Table 2: Aspects of the construct validity in psychometrics and corresponding features in MRC.
1. Content. Definition in psychometrics: evidence of content relevance, representativeness, and technical quality. Correspondence in MRC: questions require reading comprehension skills with sufficient coverage and representativeness over the representation levels.
2. Substantive. Definition: theoretical rationales for the observed consistencies in the test responses, including the task performance of models. Correspondence: questions correctly evaluate the intended intermediate process of reading comprehension and provide rationales to the interpreters.
3. Structural. Definition: fidelity of the scoring structure to the structure of the construct domain at issue. Correspondence: correspondence between the task structure and the score structure.
4. Generalizability. Definition: extent to which score properties and interpretations can be generalized to and across population groups, settings, and tasks. Correspondence: reliability of test scores in correct answers and model predictions, and applicability to other datasets and models.
5. External. Definition: extent to which the assessment scores' relationships with other measures and non-assessment behaviors reflect the expected relations. Correspondence: comparison of the performance of MRC with that of other NLU tasks and measurements.
6. Consequential. Definition: value implications of score interpretation as a basis for the consequences of test use, especially regarding the sources of invalidity related to issues of bias, fairness, and distributive justice. Correspondence: considering the model vulnerabilities to adversarial attacks and the social biases of models and datasets to ensure the fairness of model outputs.
Naoya Inoue, Pontus Stenetorp, and Kentaro Inui. 2020. R4C: A benchmark for evaluating RC systems to get the right answer for the right reason. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 6740-6750, Online. Association for Computational Linguistics. Alon Jacovi and Yoav Goldberg. 2020. Towards faithfully interpretable NLP systems: How should we define and evaluate faithfulness? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4198-4205, Online. Association for Computational Linguistics. Peter Jansen, Niranjan Balasubramanian, Mihai Surdeanu, and Peter Clark. 2016. What's in an explanation? characterizing knowledge and inference requirements for elementary science exams. In Proceedings of COLING 2016, the 26th International Conference on Computational Linguistics: Technical Papers, pages 2956-2965, Osaka, Japan. The COLING 2016 Organizing Committee. Robin Jia and Percy Liang. 2017. Adversarial examples for evaluating reading comprehension systems. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 2011-2021. Association for Computational Linguistics. Qiao Jin, Bhuwan Dhingra, Zhengping Liu, William Cohen, and Xinghua Lu. 2019. PubMedQA: A dataset for biomedical research question answering. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2567-2577, Hong Kong, China. Association for Computational Linguistics. Mandar Joshi, Eunsol Choi, Daniel Weld, and Luke Zettlemoyer. 2017. TriviaQA: A large scale distantly supervised challenge dataset for reading comprehension. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1601-1611. Association for Computational Linguistics. Samira Ebrahimi Kahou, Adam Atkinson, Vincent Michalski,Ákos Kádár, Adam Trischler, and Yoshua Bengio. 2018. FigureQA: An annotated figure dataset for visual reasoning. In International Conference on Learning Representations Workshop Track. Divyansh Kaushik and Zachary C. Lipton. 2018. How much reading does reading comprehension require? a critical investigation of popular benchmarks. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 5010-5015. Association for Computational Linguistics. Aniruddha Kembhavi, Minjoon Seo, Dustin Schwenk, Jonghyun Choi, Ali Farhadi, and Hannaneh Hajishirzi. 2017. Are you smarter than a sixth grader? textbook question answering for multimodal machine comprehension. In the IEEE Conference on Computer Vision and Pattern Recognition. Daniel Khashabi, Snigdha Chaturvedi, Michael Roth, Shyam Upadhyay, and Dan Roth. 2018. Looking beyond the surface: A challenge set for reading comprehension over multiple sentences. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 252-262. Association for Computational Linguistics. Daniel Khashabi, Sewon Min, Tushar Khot, Ashish Sabharwal, Oyvind Tafjord, Peter Clark, and Hannaneh Hajishirzi. 2020. UnifiedQA: Crossing format boundaries with a single QA system. In Findings of the Association for Computational Linguistics: EMNLP 2020, pages 1896-1907, Online. Association for Computational Linguistics. 
Tushar Khot, Peter Clark, Michal Guerquin, Peter Jansen, and Ashish Sabharwal. 2020. QASC: A dataset for question answering via sentence composition. In The Thirty-Fourth AAAI Conference on Artificial Intelligence, AAAI 2020, The Thirty-Second Innovative Applications of Artificial Intelligence Conference, IAAI 2020, The Tenth AAAI Symposium on Educational Advances in Artificial Intelligence, EAAI 2020, New York, NY, USA, February 7-12, 2020, pages 8082-8090. AAAI Press. Walter Kintsch. 1988. The role of knowledge in discourse comprehension: A construction-integration model. Psychological review, 95(2):163. Walter Kintsch and Katherine A Rawson. 2005. Comprehension. The Science of Reading: A Handbook, pages 211-226. Tomáš Kočiský, Jonathan Schwarz, Phil Blunsom, Chris Dyer, Karl Moritz Hermann, Gábor Melis, and Edward Grefenstette. 2018. The NarrativeQA reading comprehension challenge. Transactions of the Association for Computational Linguistics, 6:317-328. Tom Kwiatkowski, Jennimaria Palomaki, Olivia Redfield, Michael Collins, Ankur Parikh, Chris Alberti, Danielle Epstein, Illia Polosukhin, Jacob Devlin, Kenton Lee, Kristina Toutanova, Llion Jones, Matthew Kelcey, Ming-Wei Chang, Andrew M. Dai, Jakob Uszkoreit, Quoc Le, and Slav Petrov. 2019. Natural questions: A benchmark for question answering research. Transactions of the Association for Computational Linguistics, 7:453-466. Igor Labutov, Bishan Yang, Anusha Prakash, and Amos Azaria. 2018. Multi-relational question answering from narratives: Machine reading and reasoning in simulated worlds. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 833-844. Association for Computational Linguistics. Guokun Lai, Qizhe Xie, Hanxiao Liu, Yiming Yang, and Eduard Hovy. 2017. RACE: Large-scale reading comprehension dataset from examinations. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 796-805. Association for Computational Linguistics. John Lalor, Hao Wu, and Hong Yu. 2016. Building an evaluation scale using item response theory. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 648-657, Austin, Texas. Association for Computational Linguistics. Hector J. Levesque. 2014. On our best behaviour. Artificial Intelligence, 212:27 -35. Patrick Lewis, Pontus Stenetorp, and Sebastian Riedel. 2021. Question and answer test-train overlap in open-domain question answering datasets. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics, Online. Association for Computational Linguistics. Kevin Lin, Oyvind Tafjord, Peter Clark, and Matt Gardner. 2019. Reasoning over paragraph effects in situations. In Proceedings of the 2nd Workshop on Machine Reading for Question Answering, pages 58-62, Hong Kong, China. Association for Computational Linguistics. Tal Linzen. 2020. How can we accelerate progress towards human-like linguistic generalization? In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 5210-5217, Online. Association for Computational Linguistics. Jian Liu, Leyang Cui, Hanmeng Liu, Dandan Huang, Yile Wang, and Yue Zhang. 2020. Logiqa: A challenge dataset for machine reading comprehension with logical reasoning. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, IJCAI-20, pages 3622-3628. 
International Joint Conferences on Artificial Intelligence Organization. Main track. Nelson F. Liu, Matt Gardner, Yonatan Belinkov, Matthew E. Peters, and Noah A. Smith. 2019. Linguistic knowledge and transferability of contextual representations. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 1073-1094, Minneapolis, Minnesota. Association for Computational Linguistics. Peter LoBue and Alexander Yates. 2011. Types of common-sense knowledge needed for recognizing textual entailment. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, pages 329-334, Portland, Oregon, USA. Association for Computational Linguistics. Kaixin Ma, Tomasz Jurczyk, and Jinho D. Choi. 2018. Challenging reading comprehension on daily conversation: Passage completion on multiparty dialog. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 2039-2048, New Orleans, Louisiana. Association for Computational Linguistics. Christopher D. Manning. 2006. Local textual inference: It's hard to circumscribe, but you know it when you see it-and NLP needs it. Unpublished manuscript. David Marr. 1982. Vision: A computational investigation into the human representation and processing of visual information. New York: Freeman. Tom McCoy, Ellie Pavlick, and Tal Linzen. 2019. Right for the wrong reasons: Diagnosing syntactic heuristics in natural language inference. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 3428-3448, Florence, Italy. Association for Computational Linguistics. Danielle S McNamara and Joe Magliano. 2009. Toward a comprehensive model of comprehension. Psychology of learning and motivation, 51:297-384. Samuel Messick. 1995. Validity of psychological assessment: Validation of inferences from persons' responses and performances as scientific inquiry into score meaning. American psychologist, 50(9):741. Todor Mihaylov, Peter Clark, Tushar Khot, and Ashish Sabharwal. 2018. Can a suit of armor conduct electricity? a new dataset for open book question answering. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 2381-2391, Brussels, Belgium. Association for Computational Linguistics. Sewon Min, Eric Wallace, Sameer Singh, Matt Gardner, Hannaneh Hajishirzi, and Luke Zettlemoyer. 2019. Compositional questions do not necessitate multi-hop reasoning. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 4249-4257, Florence, Italy. Association for Computational Linguistics. Sewon Min, Victor Zhong, Richard Socher, and Caiming Xiong. 2018. Efficient and robust question answering from minimal context over documents. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1725-1735. Association for Computational Linguistics. Nasrin Mostafazadeh, Michael Roth, Annie Louis, Nathanael Chambers, and James Allen. 2017. LS-DSem 2017 shared task: The story cloze test. In Proceedings of the 2nd Workshop on Linking Models of Lexical, Sentential and Discourse-level Semantics, pages 46-51. Association for Computational Linguistics. Pramod Kaushik Mudrakarta, Ankur Taly, Mukund Sundararajan, and Kedar Dhamdhere. 
2018. Did the model understand the question? In Proceedings of the 56th Annual Meeting of the AssociationMax Bartolo, Alastair Roberts, Johannes Welbl, Sebas-
tian Riedel, and Pontus Stenetorp. 2020. Beat the
AI: Investigating adversarial human annotation for
reading comprehension. Transactions of the Associ-
ation for Computational Linguistics, 8:662-678.
Regina Barzilay and Mirella Lapata. 2008. Modeling
local coherence: An entity-based approach. Compu-
tational Linguistics, 34(1):1-34.
Emily M. Bender and Batya Friedman. 2018. Data
statements for natural language processing: Toward
mitigating system bias and enabling better science.
Transactions of the Association for Computational
Linguistics, 6:587-604.
Emily M. Bender and Alexander Koller. 2020. Climb-
ing towards NLU: On meaning, form, and under-
standing in the age of data. In Proceedings of the
58th Annual Meeting of the Association for Compu-
tational Linguistics, pages 5185-5198, Online. As-
sociation for Computational Linguistics.
Luisa Bentivogli, Elena Cabrio, Ido Dagan, Danilo
Giampiccolo, Medea Lo Leggio, and Bernardo
Magnini. 2010. Building textual entailment special-
ized data sets: a methodology for isolating linguis-
tic phenomena relevant to inference. In Proceed-
ings of the Seventh International Conference on Lan-
guage Resources and Evaluation (LREC'10), Val-
letta, Malta. European Language Resources Associ-
ation (ELRA).
Chandra Bhagavatula, Ronan Le Bras, Chaitanya
Malaviya, Keisuke Sakaguchi, Ari Holtzman, Han-
nah Rashkin, Doug Downey, Wen tau Yih, and Yejin
Choi. 2020. Abductive commonsense reasoning. In
International Conference on Learning Representa-
tions.
Yonatan Bisk, Ari Holtzman, Jesse Thomason, Jacob
Andreas, Yoshua Bengio, Joyce Chai, Mirella Lap-
ata, Angeliki Lazaridou, Jonathan May, Aleksandr
Nisnevich, Nicolas Pinto, and Joseph Turian. 2020.
Experience grounds language. In Proceedings of the
2020 Conference on Empirical Methods in Natural
Language Processing (EMNLP), pages 8718-8735,
Online. Association for Computational Linguistics.
Michael Boratko, Xiang Li, Tim O'Gorman, Rajarshi
Das, Dan Le, and Andrew McCallum. 2020. Pro-
toQA: A question answering dataset for prototypi-
cal common-sense reasoning. In Proceedings of the
2020 Conference on Empirical Methods in Natural
Language Processing (EMNLP), pages 1122-1136,
Online. Association for Computational Linguistics.
Michael Boratko, Harshit Padigela, Divyendra Mikki-
lineni, Pritish Yuvraj, Rajarshi Das, Andrew McCal-
lum, Maria Chang, Achille Fokoue-Nkoutche, Pavan
Kapanipathi, Nicholas Mattei, Ryan Musa, Kartik
Talamadupula, and Michael Witbrock. 2018. A sys-
tematic classification of knowledge, reasoning, and
context within the ARC dataset. In Proceedings
of the Workshop on Machine Reading for Question
Answering, pages 60-70. Association for Computa-
tional Linguistics.
Xavier Bouthillier, César Laurent, and Pascal Vincent.
2019. Unreproducible research is reproducible. In
Proceedings of the 36th International Conference
on Machine Learning, volume 97 of Proceedings of
Machine Learning Research, pages 725-734, Long
Beach, California, USA. PMLR.
Samuel R. Bowman, Gabor Angeli, Christopher Potts,
and Christopher D. Manning. 2015. A large an-
notated corpus for learning natural language infer-
ence. In Proceedings of the 2015 Conference on
Empirical Methods in Natural Language Processing,
pages 632-642. Association for Computational Lin-
guistics.
Christopher J.C. Burges. 2013. Towards the machine
comprehension of text: An essay. Technical re-
port, Microsoft Research Technical Report MSR-
TR-2013-125.
Vittorio Castelli, Rishav Chakravarti, Saswati Dana,
Anthony Ferritto, Radu Florian, Martin Franz, Di-
nesh Garg, Dinesh Khandelwal, Scott McCarley,
Michael McCawley, Mohamed Nasr, Lin Pan, Cezar
Pendus, John Pitrelli, Saurabh Pujar, Salim Roukos,
Andrzej Sakrajda, Avi Sil, Rosario Uceda-Sosa,
Todd Ward, and Rong Zhang. 2020. The TechQA
dataset. In Proceedings of the 58th Annual Meet-
ing of the Association for Computational Linguistics,
pages 1269-1278, Online. Association for Computa-
tional Linguistics.
Danqi Chen. 2018. Neural Reading Comprehension
and Beyond. Ph.D. thesis, Stanford University.
Danqi Chen, Adam Fisch, Jason Weston, and Antoine
Bordes. 2017. Reading wikipedia to answer open-
domain questions. In Proceedings of the 55th An-
nual Meeting of the Association for Computational
Linguistics (Volume 1: Long Papers), pages 1870-
1879. Association for Computational Linguistics.
Jifan Chen and Greg Durrett. 2019. Understanding
dataset design choices for multi-hop reasoning. In
Proceedings of the 2019 Conference of the North
American Chapter of the Association for Compu-
tational Linguistics: Human Language Technolo-
gies, Volume 1 (Long and Short Papers), pages
4026-4032, Minneapolis, Minnesota. Association
for Computational Linguistics.
Michael Chen, Mike D'Arcy, Alisa Liu, Jared Fer-
nandez, and Doug Downey. 2019. CODAH: An
adversarially-authored question answering dataset
for common sense. In Proceedings of the 3rd Work-
shop on Evaluating Vector Space Representations
for NLP, pages 63-69, Minneapolis, USA. Associ-
ation for Computational Linguistics.
Wenhu Chen, Hanwen Zha, Zhiyu Chen, Wenhan
Xiong, Hong Wang, and William Yang Wang. 2020.
HybridQA: A dataset of multi-hop question answer-
ing over tabular and textual data. In Findings of the
Association for Computational Linguistics: EMNLP
2020, pages 1026-1036, Online. Association for
Computational Linguistics.
Eunsol Choi, He He, Mohit Iyyer, Mark Yatskar, Wen-
tau Yih, Yejin Choi, Percy Liang, and Luke Zettle-
moyer. 2018. QuAC: Question answering in con-
text. In Proceedings of the 2018 Conference on
Empirical Methods in Natural Language Processing,
pages 2174-2184, Brussels, Belgium. Association
for Computational Linguistics.
Christopher Clark, Kenton Lee, Ming-Wei Chang,
Tom Kwiatkowski, Michael Collins, and Kristina
Toutanova. 2019. BoolQ: Exploring the surprising
difficulty of natural yes/no questions. In Proceed-
ings of the 2019 Conference of the North American
Chapter of the Association for Computational Lin-
guistics: Human Language Technologies, Volume 1
(Long and Short Papers), pages 2924-2936, Min-
neapolis, Minnesota. Association for Computational
Linguistics.
Peter Clark, Isaac Cowhey, Oren Etzioni, Tushar Khot,
Ashish Sabharwal, Carissa Schoenick, and Oyvind
Tafjord. 2018. Think you have solved question an-
swering? Try ARC, the AI2 reasoning challenge.
CoRR, abs/1803.05457.
Ido Dagan, Oren Glickman, and Bernardo Magnini.
2006. The pascal recognising textual entailment
challenge. In Machine Learning Challenges Work-
shop, pages 177-190. Springer.
Bhavana Dalvi, Lifu Huang, Niket Tandon, Wen-tau
Yih, and Peter Clark. 2018. Tracking state changes
in procedural text: a challenge dataset and models
for process paragraph comprehension. In Proceed-
ings of the 2018 Conference of the North American
Chapter of the Association for Computational Lin-
guistics: Human Language Technologies, Volume
1 (Long Papers), pages 1595-1604. Association for
Computational Linguistics.
Pradeep Dasigi, Nelson F. Liu, Ana Marasovic,
Noah A. Smith, and Matt Gardner. 2019. Quoref:
A reading comprehension dataset with questions re-
quiring coreferential reasoning. In Proceedings of
the 2019 Conference on Empirical Methods in Nat-
ural Language Processing and the 9th International
Joint Conference on Natural Language Processing
(EMNLP-IJCNLP), pages 5927-5934, Hong Kong,
China. Association for Computational Linguistics.
Ernest Davis and Gary Marcus. 2015. Commonsense
reasoning and commonsense knowledge in artificial
intelligence. Commun. ACM, 58(9):92-103.
Bhuwan Dhingra, Kathryn Mazaitis, and William W.
Cohen. 2017. Quasar: Datasets for question answer-
ing by search and reading.
Finale Doshi-Velez and Been Kim. 2018. Consider-
ations for Evaluation and Generalization in Inter-
pretable Machine Learning, 1st edition. Springer In-
ternational Publishing.
Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Re-
ichart. 2018. The hitchhiker's guide to testing statis-
tical significance in natural language processing. In
Proceedings of the 56th Annual Meeting of the As-
sociation for Computational Linguistics (Volume 1:
Long Papers), pages 1383-1392, Melbourne, Aus-
tralia. Association for Computational Linguistics.
Dheeru Dua, Yizhong Wang, Pradeep Dasigi, Gabriel
Stanovsky, Sameer Singh, and Matt Gardner. 2019.
DROP: A reading comprehension benchmark requir-
ing discrete reasoning over paragraphs. In Proceed-
ings of the 2019 Conference of the North American
Chapter of the Association for Computational Lin-
guistics: Human Language Technologies, Volume 1
(Long and Short Papers), pages 2368-2378, Min-
neapolis, Minnesota. Association for Computational
Linguistics.
Jesse Dunietz, Greg Burnham, Akash Bharadwaj,
Owen Rambow, Jennifer Chu-Carroll, and Dave Fer-
rucci. 2020. To test machine comprehension, start
by defining comprehension. In Proceedings of the
58th Annual Meeting of the Association for Compu-
tational Linguistics, pages 7839-7859, Online. As-
sociation for Computational Linguistics.
Matthew Dunn, Levent Sagun, Mike Higgins, V. Ugur
Guney, Volkan Cirik, and Kyunghyun Cho. 2017.
SearchQA: A new Q&A dataset augmented with
context from a search engine.
Shi Feng, Eric Wallace, Alvin Grissom II, Mohit Iyyer,
Pedro Rodriguez, and Jordan Boyd-Graber. 2018.
Pathologies of neural models make interpretations
difficult. In Proceedings of the 2018 Conference on
Empirical Methods in Natural Language Processing,
pages 3719-3728. Association for Computational
Linguistics.
James Ferguson, Matt Gardner, Hannaneh Hajishirzi,
Tushar Khot, and Pradeep Dasigi. 2020. IIRC: A
dataset of incomplete information reading compre-
hension questions. In Proceedings of the 2020 Con-
ference on Empirical Methods in Natural Language
Processing (EMNLP), pages 1137-1147, Online. As-
sociation for Computational Linguistics.
Adam Fisch, Alon Talmor, Robin Jia, Minjoon Seo, Eu-
nsol Choi, and Danqi Chen. 2019. MRQA 2019
shared task: Evaluating generalization in reading
comprehension. In Proceedings of the 2nd Work-
shop on Machine Reading for Question Answering,
pages 1-13, Hong Kong, China. Association for
Computational Linguistics.
R Michael Furr. 2018. Psychometrics: an introduction.
Sage Publications.
Matt Gardner, Yoav Artzi, Victoria Basmov, Jonathan
Berant, Ben Bogin, Sihao Chen, Pradeep Dasigi,
Dheeru Dua, Yanai Elazar, Ananth Gottumukkala,
Nitish Gupta, Hannaneh Hajishirzi, Gabriel Ilharco,
Daniel Khashabi, Kevin Lin, Jiangming Liu, Nel-
son F. Liu, Phoebe Mulcaire, Qiang Ning, Sameer
Singh, Noah A. Smith, Sanjay Subramanian, Reut
Tsarfaty, Eric Wallace, Ally Zhang, and Ben Zhou.
2020. Evaluating models' local decision boundaries
via contrast sets. In Findings of the Association
for Computational Linguistics: EMNLP 2020, pages
1307-1323, Online. Association for Computational
Linguistics.
Matt Gardner, Jonathan Berant, Hannaneh Hajishirzi,
Alon Talmor, and Sewon Min. 2019. On making
reading comprehension more comprehensive. In
Proceedings of the 2nd Workshop on Machine Read-
ing for Question Answering, pages 105-112, Hong
Kong, China. Association for Computational Lin-
guistics.
Timnit Gebru, Jamie Morgenstern, Briana Vec-
chione, Jennifer Wortman Vaughan, Hanna Wal-
lach, Hal Daumé III, and Kate Crawford. 2018.
Datasheets for datasets. ArXiv preprint 1803.09010.
Robert Geirhos, Jörn-Henrik Jacobsen, Claudio
Michaelis, Richard Zemel, Wieland Brendel,
Matthias Bethge, and Felix A. Wichmann. 2020.
Shortcut learning in deep neural networks. Nature
Machine Intelligence, 2(11):665-673.
Leilani H Gilpin, David Bau, Ben Z Yuan, Ayesha Ba-
jwa, Michael Specter, and Lalana Kagal. 2018. Ex-
plaining explanations: An overview of interpretabil-
ity of machine learning. In 2018 IEEE 5th Interna-
tional Conference on data science and advanced an-
alytics (DSAA), pages 80-89. IEEE.
Arthur C. Graesser, Murray Singer, and Tom Trabasso.
1994. Constructing inferences during narrative text
comprehension. Psychological review, 101(3):371.
Suchin Gururangan, Swabha Swayamdipta, Omer
Levy, Roy Schwartz, Samuel Bowman, and Noah A.
Smith. 2018. Annotation artifacts in natural lan-
guage inference data. In Proceedings of the 2018
Conference of the North American Chapter of the
Association for Computational Linguistics: Human
Language Technologies, Volume 2 (Short Papers),
pages 107-112. Association for Computational Lin-
guistics.
Ivan Habernal, Henning Wachsmuth, Iryna Gurevych,
and Benno Stein. 2018. The argument reasoning
comprehension task: Identification and reconstruc-
tion of implicit warrants. In Proceedings of the 2018
Conference of the North American Chapter of the
Association for Computational Linguistics: Human
Language Technologies, Volume 1 (Long Papers),
pages 1930-1940, New Orleans, Louisiana. Associ-
ation for Computational Linguistics.
Wei He, Kai Liu, Jing Liu, Yajuan Lyu, Shiqi Zhao,
Xinyan Xiao, Yuan Liu, Yizhong Wang, Hua Wu,
Qiaoqiao She, Xuan Liu, Tian Wu, and Haifeng
Wang. 2018. DuReader: a Chinese machine read-
ing comprehension dataset from real-world appli-
cations. In Proceedings of the Workshop on Ma-
chine Reading for Question Answering, pages 37-
46, Melbourne, Australia. Association for Computa-
tional Linguistics.
Karl Moritz Hermann, Tomas Kocisky, Edward Grefen-
stette, Lasse Espeholt, Will Kay, Mustafa Suleyman,
and Phil Blunsom. 2015. Teaching machines to read
and comprehend. In C. Cortes, N. D. Lawrence,
D. D. Lee, M. Sugiyama, and R. Garnett, editors,
Advances in Neural Information Processing Systems
28, pages 1693-1701. Curran Associates, Inc.
José Hernández-Orallo. 2017. The measure of all
minds: evaluating natural and artificial intelligence.
Cambridge University Press.
Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia
Polosukhin, Andrew Fandrianto, Jay Han, Matthew
Kelcey, and David Berthelot. 2016. WikiReading: A
novel large-scale language understanding task over
wikipedia. In Proceedings of the 54th Annual Meet-
ing of the Association for Computational Linguistics
(Volume 1: Long Papers), pages 1535-1545, Berlin,
Germany. Association for Computational Linguis-
tics.
Felix Hill, Antoine Bordes, Sumit Chopra, and Jason
Weston. 2016. The goldilocks principle: Reading
children's books with explicit memory representa-
tions. In International Conference on Learning Rep-
resentations.
Xanh Ho, Anh-Khoa Duong Nguyen, Saku Sugawara,
and Akiko Aizawa. 2020. Constructing a multi-
hop QA dataset for comprehensive evaluation of
reasoning steps. In Proceedings of the 28th Inter-
national Conference on Computational Linguistics,
pages 6609-6625, Barcelona, Spain (Online). Inter-
national Committee on Computational Linguistics.
Lifu Huang, Ronan Le Bras, Chandra Bhagavatula, and
Yejin Choi. 2019. Cosmos QA: Machine reading
comprehension with contextual commonsense rea-
soning. In Proceedings of the 2019 Conference on
Empirical Methods in Natural Language Processing
and the 9th International Joint Conference on Natu-
ral Language Processing (EMNLP-IJCNLP), pages
2391-2401, Hong Kong, China. Association for
Computational Linguistics.
In psychology, a construct is an abstract concept, which facilitates the understanding of human behavior such as vocabulary, skills, and comprehension.
Acknowledgments The authors would like to thank Xanh Ho for helping create the dataset list and the anonymous reviewers for their insightful comments. This work was supported by JSPS KAKENHI Grant Number 18H03297, JST ACT-X Grant Number JPMJAX190G, and JST PRESTO Grant Number JPMJPR20C4.
Table 6: Machine reading comprehension datasets published in 2020. In the answer style column, descript represents description (free-form answering) and extract denotes answer extraction by selecting a span in given texts. Size indicates the size of the whole dataset including training, development, and test sets. In the question source column, crowd indicates questions written by crowdworkers and query indicates questions collected from search-engine queries.
VQA: Visual question answering. Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, Devi Parikh, Proceedings of the IEEE International Conference on Computer Vision. the IEEE International Conference on Computer VisionStanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Mar- garet Mitchell, Dhruv Batra, C. Lawrence Zitnick, and Devi Parikh. 2015. VQA: Visual question an- swering. In Proceedings of the IEEE International Conference on Computer Vision, pages 2425-2433.
for Computational Linguistics (Volume 1: Long Papers), pages 1896-1906. Association for Computational Linguistics.
A Machine Reading Comprehension Datasets. Tables 3, 4, 5, and 6 list machine reading comprehension and related datasets along with their answer styles, dataset size, type of corpus, sourcing methods, and focuses. |
248,779,938 | Automatically Discarding Straplines to Improve Data Quality for Abstractive News Summarization | Recent improvements in automatic news summarization fundamentally rely on large corpora of news articles and their summaries. These corpora are often constructed by scraping news websites, which results in including not only summaries but also other kinds of texts. Apart from more generic noise, we identify straplines as a form of text scraped from news websites that commonly turn out not to be summaries. The presence of these non-summaries threatens the validity of scraped corpora as benchmarks for news summarization. We have annotated extracts from two news sources that form part of the Newsroom corpus (Grusky et al., 2018), labeling those which were straplines, those which were summaries, and those which were both. We present a rule-based strapline detection method that achieves good performance on a manually annotated test set. Automatic evaluation indicates that removing straplines and noise from the training data of a news summarizer results in higher quality summaries, with improvements as high as 7 points ROUGE score. * Equal contribution. 1 We release our code at https://github.com/namednil/straplines | [
226262184,
13752552,
52013710,
233189618,
1918428,
964287
] | Automatically Discarding Straplines to Improve Data Quality for Abstractive News Summarization
May 26, 2022
Amr Keleg
Institute for Language
Cognition and Computation
University of Edinburgh
Matthias Lindemann m.m.lindemann@sms.ed.ac.uk
Institute for Language
Cognition and Computation
University of Edinburgh
Danyang Liu
Institute for Language
Cognition and Computation
University of Edinburgh
Wanqiu Long
Institute for Language
Cognition and Computation
University of Edinburgh
Bonnie L Webber bonnie.webber@ed.ac.uk
Institute for Language
Cognition and Computation
University of Edinburgh
Automatically Discarding Straplines to Improve Data Quality for Abstractive News Summarization
May 26, 2022. Proceedings of NLP Power! The First Workshop on Efficient Benchmarking in NLP, pages 42-51
Recent improvements in automatic news summarization fundamentally rely on large corpora of news articles and their summaries. These corpora are often constructed by scraping news websites, which results in including not only summaries but also other kinds of texts. Apart from more generic noise, we identify straplines as a form of text scraped from news websites that commonly turn out not to be summaries. The presence of these non-summaries threatens the validity of scraped corpora as benchmarks for news summarization. We have annotated extracts from two news sources that form part of the Newsroom corpus (Grusky et al., 2018), labeling those which were straplines, those which were summaries, and those which were both. We present a rule-based strapline detection method that achieves good performance on a manually annotated test set. Automatic evaluation indicates that removing straplines and noise from the training data of a news summarizer results in higher quality summaries, with improvements as high as 7 points ROUGE score. * Equal contribution. 1 We release our code at https://github.com/namednil/straplines
Introduction
Automatic text summarization is a challenging task. Recent progress has been driven by benchmarks that were collected by scraping a large collection of web pages, including Gigaword (Rush et al., 2015), CNN/DailyMail (Nallapati et al., 2016), Newsroom (Grusky et al., 2018), and XSum (Narayan et al., 2018). Due to the way they are collected, these datasets contain a substantial portion of articles that are paired with texts that are not summaries. This flaw in data quality negatively impacts research in two ways: (i) models trained on these benchmarks tend to reproduce flaws in the data, making them less useful for summarization, and (ii) any evaluation against a reference text is meaningless if the reference is not actually a summary.
In this work, we present methods for improving the data quality in scraped news summarization corpora, focusing on the Newsroom benchmark (Grusky et al., 2018). We identify two main issues with the data quality: (i) noise in the extraction process (the wrong field being scraped, markup, ...), which was previously also identified to be an issue by Kryscinski et al. (2019), and (ii) straplines. According to the writing guidelines used by CERN 2 , "[t]he strap[line] gives added "teaser" information not included in the headline, providing a succinct summary of the most important points of the article. It tells the reader what to expect, and invites them to find out more." Figure 1 shows an example of a strapline ("Don't expect to be riding one by 2020") below the regular headline. While the CERN guidelines emphasize the function of straplines to provide a summary, we find that most straplines in the Newsroom corpus are not summaries of their associated articles. Therefore, in order to obtain high quality data, it is necessary to distinguish a strapline aimed at piquing a reader's interest from an abstractive summary. To the best of our knowledge, no work has tried to distinguish straplines from summaries before, and even the word "strapline" does not appear in the ACL Anthology in a research paper.
In our work, one pair of us designed a strapline annotation guideline through discussions and manual pre-annotations ( §3.1) and then annotated a development and test set for evaluating strapline classifiers. Based on the guideline, a separate pair created heuristics for a rule-based classifier that distinguishes straplines from summaries ( §3.2). We empirically verify the usefulness of these heuristics for strapline detection ( §4.2). Automatic evaluation indicates that removing straplines and noise from the training data with our heuristics results in higher quality summaries, with improvements as high as 7 points ROUGE score when compared to reference summaries ( §4.3).
Related work
Several works have analyzed existing summarization datasets from different aspects, but none have identified straplines as an issue. Kryscinski et al. (2019) quantified HTML artifacts in two large scraped summarization datasets, CNN/DM (Nallapati et al., 2016) and Newsroom (Grusky et al., 2018). They found that "summaries" containing such artifacts appear in ≈ 3.2% of the Newsroom data. They also argued that many of these artifacts could be detected using simple regular expressions and heuristics. Jung et al. (2019) define three sub-aspects of text summarization and analyze how different domains of summarization datasets are biased towards these aspects. Bommasani and Cardie (2020) evaluate the quality of ten summarization datasets; their results show that most summarization datasets contain a sizable number of low quality examples and that their metrics can detect generically low quality examples. Tejaswin et al. (2021) analyzed 600 samples from three popular datasets, studying data quality issues and varying degrees of sample complexity; their analysis of summarization models demonstrates that performance is heavily dependent on the data and that better quality summarization datasets are necessary.
Given that research has shown that the training data of summarization models is noisy, researchers have proposed methods for training summarization models on noisy data. For example, Kano et al. (2021) propose a model that quantifies noise in order to train summarization models from noisy data. The resulting improvements indicate that noisy data has a noticeable impact on the training of these models.
Methodology
The Newsroom corpus contains articles from 38 news sources that vary in style and topics. News articles were scraped from HTML pages, where the page's title tag is parsed as the article's headline, while the page's body tag is parsed as the article's body. Since there was no consistent metadata tag indicating the summary of an article, Grusky et al. (2018) used different metadata tags to extract summaries. These tags are generally added to be used by social media platforms and search engines. News publishers do not share a single format for organizing metadata; nevertheless, all (or most) use the metadata label description, albeit for different things. Since the creators of Newsroom take the first metadata tag containing the keyword description as the summary of each article, this might be one reason that a strapline appears in the extract for an article in place of the real summary. Knowing that the "summaries" in the Newsroom corpus are of mixed quality, we call what Grusky et al. (2018) scraped from the web extracts, which may or may not be a genuine summary. Grusky et al. (2018) classify extracts according to how much text they repeat verbatim from the article into three categories: extractive (nearly everything appears verbatim in the article), abstractive (summarized in different words) and mixed.
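To make the description-based extraction described above concrete, the following minimal sketch illustrates how a description-like metadata field might be pulled from a news page. It is our own illustration (using BeautifulSoup and common tag names such as og:description), not the actual Newsroom scraper, and it shows how the first matching tag can just as easily hold a strapline as a summary.

```python
from bs4 import BeautifulSoup

# Metadata fields that commonly carry a description; publishers use them
# inconsistently, so the first match may well be a strapline rather than a summary.
DESCRIPTION_KEYS = ("og:description", "twitter:description", "description")

def extract_description(html):
    """Return the first description-like metadata value found in the page, if any."""
    soup = BeautifulSoup(html, "html.parser")
    for meta in soup.find_all("meta"):
        # The field name may live in either the `property` or the `name` attribute.
        key = (meta.get("property") or meta.get("name") or "").lower()
        if any(k in key for k in DESCRIPTION_KEYS):
            return meta.get("content")
    return None

page = '<meta property="og:description" content="Don\'t expect to be riding one by 2020">'
print(extract_description(page))  # a strapline ends up in the "summary" field
```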
We have focused on extracts classified as "abstractive". We have also limited our study to two of the 38 news sources -ones with different styles and covering different topics, specifically the New York Times (NYT) and time.com.
Annotation
The extracts in the Newsroom corpus do not all fall neatly into the categories of straplines, summaries, and noise; in particular, straplines and summaries are not mutually exclusive and can be seen to form a continuum.
Even in this continuum, what one would definitively classify as a summary depends on multiple factors like its purpose and audience (Spärck Jones, 1999). Therefore, we only identify common characteristics of straplines and summaries, restricted to the context of news articles, such as those in the Newsroom corpus. Regarding purpose and audience, we generally assume the audience consists of people who read news on a somewhat regular basis, and that this is the same audience as for the summaries. The purpose is to provide a brief overview of the news of the day, and we assume this overview includes the headline. This means that the headline plays a central role in our annotation procedure. A practical implication of this is that annotation decisions can sometimes be made very swiftly without reading the actual article.
We identify the following main characteristics of straplines that we want to exclude (ordered by importance):
Clickbait A strapline can be designed to attract a reader's attention, rather than being informative.
Little or redundant information A strapline does not add much information to the headline.
General A strapline can make a very general statement, i.e. it would fit for a number of very different articles.
Comment A strapline can be a comment on the event described in the article. This does not apply if the article itself is an opinion piece.
Joke A strapline can be a joke.
Informal A strapline may use informal language.
An extract need not have all the stated properties to be considered a strapline. The characteristics are illustrated in Table 1.
The characteristics of summaries are partially complementary to those of straplines. Again, an extract need not have all the characteristics to be considered a summary:
Adds information A summary adds information to the headline.
Relevance A summary contains no irrelevant information and little background information.
Focus The summary of an article describing an event (entity) focuses on that event (entity).
Proposition A summary tends to be one or more propositions.
The following example illustrates that some extracts have characteristics of both a summary and a strapline:

Headline: Jan. 18 Internet Blackout to Protest SOPA: Reddit Says Yes
Extract: Following speculation, Reddit has confirmed plans to go dark on Jan. 18 to protest the Stop Online Piracy Act. Wikipedia may follow suit, but what about Google, Facebook and other big-name tech companies?

While the extract adds relevant information to the headline, it also uses a question to attract the reader's attention instead of giving away that "[...] Google and Twitter declined to comment on their support for an Internet blackout", as can be found in the main article.
Labels Because of this overlap in the categories, we annotate each article with one of the following labels: "summary", "strapline", "strapline and summary", "neither" and "paraphrase". We use the category "neither" for noise or when the headline or the extract is difficult to understand before reading the article. We sometimes observe that the extract is a close paraphrase of the headline. By definition, a paraphrase does not add information and therefore would not qualify as a summary. In another use case, however, where we assume that a user does not have access to the headline, the extract may provide valuable information. In order to make our annotation more robust to this use case, we include the category of paraphrase, so that those extracts can be included or excluded accordingly.
Strapline detection pipeline
Before detecting straplines, we preprocess the data to exclude noisy extracts (e.g., extracts with HTML tags). Afterwards, the strapline detection method is used to split the remaining extracts into straplines and summaries. The following subsections describe the main heuristics used for noise filtration and strapline detection, with implementation details included in Appendix A.

Noise filtration

Kryscinski et al. (2019) mention that noisy samples represent about 3.2% of Newsroom, hinting that such samples can be detected with simple patterns, but without explicitly describing these patterns. Consequently, we start by looking for patterns of noise in the Newsroom dataset as a first preprocessing step, and identify five clear patterns of noise:
Web formatting syntax An extract containing remnants of web formatting syntax. The formatting attributes are inconsistent and not sufficiently relevant for summarization.
Truncation An extract ending abruptly, forming an incomplete sentence. This might be attributed to the fact that news providers tend to have a truncated version of the summary that ended up being scraped in place of the long version of the summary.
Dateline An extract that is just a date, which is most probably the dateline field of an article instead of its summary.
Shortness An extract that is trivially short.
Non-English An extract that isn't written in English.
Strapline detection heuristics
As mentioned in §3.1, one can distinguish straplines from summaries based on the common features that characterize each of them. As a way to automatically detect a range of straplines in the dataset, we present the following set of six rule-based heuristics:
Beginning with imperative speech One way to capture the reader's attention is to start a strapline with an imperative to read the article ("Check out ...").
Strapline characteristics: Clickbait, Little or redundant information.
Having high quotes coverage A common feature of a strapline is to quote a statement said by a person that is mentioned in the corresponding article or a quote that is related to the article's topic.

Using 1st or 2nd person pronouns Straplines may refer to the readers. This is done typically using 1st and 2nd person pronouns such as you and we.
Strapline characteristics: Clickbait, Joke, Informal.
Using question/exclamation marks Straplines are sometimes used to pose questions that stimulate the interest of the readers. On the contrary, summaries use objective sentences focusing on the main events of the articles, which makes it unlikely to find interrogative phrases in a summary.

Using a repeated extract Journalists tend to use the same strapline for an article that is being published on a regular basis (e.g., a daily/weekly column or a message to the editor section). Consequently, an article with a non-unique extract indicates that the extract is a general statement, making it a strapline.

Using a clickbait Classifying an extract as a clickbait, as described in §4.2, can be employed to detect some of the extracts that are originally straplines.
Strapline characteristics: Clickbait.
Experiments
Annotation
Two annotators 3 annotated 50 articles each from the NYT and time.com sections of the test set of Newsroom. We performed two rounds, resulting in a total of 200 articles with double annotation. In order to provide a single ground truth for the test set, the two annotators discussed their annotations and agreed on a single label for each article. For tuning the strapline detection method, we further annotated 50 articles each from the development sets of the NYT and time.com sections.

3 The annotators are authors of this paper who were not involved in the development of the heuristics, and the person responsible for the heuristics did not look at the annotations.
Results Table 2 shows how often the annotators chose a particular label for the different news sources. Proper summaries are the largest class for both news sources, but Time.com has a considerably higher proportion of undesired straplines, and also a higher proportion of extracts that are both summaries as well as straplines.
In order to see how reliably the extracts can be annotated, we compute inter-annotator agreement between the two annotators. Table 3 shows the results for two annotation rounds. We compute the agreement by splitting our annotation into two binary labels, namely straplines vs. non-straplines, and summaries vs. non-summaries, excluding paraphrases. We report the proportion of labels that are the same for both annotators ("Raw" in the table), and Cohen's κ (Cohen, 1960), which accounts for agreement that is expected by chance. The results in Table 3 show that the agreement is high, but due to the class imbalance a sizable part of that high agreement might be due to chance (low κ value). However, the results show improvements in the consistency between the two annotators in the second round.
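For reference, the two agreement figures reported in Table 3 can be computed as in the short sketch below (using scikit-learn's implementation of Cohen's κ; the toy label vectors are invented for illustration only).

```python
from sklearn.metrics import cohen_kappa_score

def agreement(labels_a, labels_b):
    """Raw agreement and Cohen's kappa between two annotators' binary labels."""
    assert len(labels_a) == len(labels_b)
    raw = sum(a == b for a, b in zip(labels_a, labels_b)) / len(labels_a)
    kappa = cohen_kappa_score(labels_a, labels_b)
    return raw, kappa

# Toy example: 1 = strapline, 0 = not a strapline (paraphrases excluded beforehand).
annotator_1 = [0, 0, 1, 0, 1, 0, 0, 0, 1, 0]
annotator_2 = [0, 0, 1, 0, 0, 0, 0, 1, 1, 0]
print(agreement(annotator_1, annotator_2))
```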
Strapline detection
Given the lack of annotated data for training a supervised strapline classification model, we implement a rule-based classifier by marking an extract as a strapline if any of the heuristics described in §3.2.2 apply to it. For the clickbait detector, we fine-tune the distilled BERT (Sanh et al., 2019) on the Webis-Clickbait-17 (Potthast et al., 2018) dataset and incorporate it into our strapline detector.
Results Table 4 shows the evaluation result of the strapline detector on the human-annotated test set. We can observe that the NYT test set is unbalanced: only 8 out of 100 samples are annotated as straplines, which also explains the difference between the accuracy and precision/recall. The Time.com set is more balanced, and we can see that our model achieves good performance, with a precision of 68% and a recall of 64%.
We apply the strapline detector on the training set to exclude the noisy samples and straplines. The result is shown in Table 5. We can observe that 20.07% of the NYT samples and 37.61% of the Time.com samples are classified as straplines, which shows that straplines are an issue that cannot be ignored in summarization datasets.

Table 6: ROUGE-1, ROUGE-2, and ROUGE-L scores for the abstractive summarizer (T5-base version) trained on the dataset with and without the straplines. The best results are in bold.
Summarization with cleaner data
We employ the most popular pre-trained sequence-to-sequence model, T5 (Raffel et al., 2019), as the basic summarizer in our experiments. We exclude the noisy samples and straplines detected by our proposed strapline detector (§4.2) from the NYT and Time.com datasets, forming a cleaner training set. We use the T5-base and T5-large models in our experiments. We fine-tune them on the original and the cleaned dataset to see the influence of excluding noise and straplines. We use ROUGE (Lin, 2004a,b) to automatically evaluate the performance of the summarizers.
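ROUGE can be computed, for instance, with the rouge-score package, a reimplementation of Lin's metric; whether the authors used this exact toolkit is not stated in the paper, so the snippet below is only a sketch of the evaluation step with an invented reference/system pair.

```python
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

# Invented reference summary and system output for illustration.
reference = "The Australian swimmer Mack Horton was booed after his victory."
generated = "Mack Horton was booed by Chinese fans after winning the race."

scores = scorer.score(reference, generated)
for name, score in scores.items():
    print(f"{name}: F1 = {score.fmeasure:.4f}")
```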
Results Table 6 shows the ROUGE-1, ROUGE-2, and ROUGE-L scores for the (T5-base) summarizer trained on the original training set and the cleaned training set.4 We can observe that the impact of straplines on NYT is more significant than on Time.com. For the Time.com dataset, most ROUGE scores increase slightly by excluding the straplines. However, performance on NYT is greatly improved, by up to 7 points. In part this is due to a repetition problem that we observe specifically on NYT: the model trained on the original data re-uses some summaries multiple times, with a single re-occurring sentence accounting for 10% of generated outputs, whereas all summaries of the model trained on the cleaned data are unique. That is, the model seems to perpetuate the property of repeating extracts in the training data (see §3.2.2).
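The repetition problem can be quantified with a simple frequency count over the generated summaries; the sketch below (with invented outputs) shows one way to measure the share of outputs taken up by the single most frequent summary.

```python
from collections import Counter

def repetition_report(generated_summaries):
    """Return the most frequent output and its share of all generated summaries."""
    counts = Counter(generated_summaries)
    most_common, freq = counts.most_common(1)[0]
    return most_common, freq / len(generated_summaries)

outputs = ["A day in the life of a Yankees fan."] * 2 + ["Unique summary one.", "Unique summary two."]
sentence, share = repetition_report(outputs)
print(f"{share:.0%} of outputs are: {sentence!r}")
```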
Case study For each news source, we manually compare the output of two T5-base models fine-tuned on the articles of the news source in the original dataset (M_original) and the cleaned one (M_clean), in order to investigate the effect of excluding noise and straplines from Newsroom. Table 7 demonstrates the differences between the summaries generated by T5-base models fine-tuned on articles of each news source. The "Output of Original Model" (M_original) column refers to the summaries generated by a model fine-tuned on the articles of Newsroom from the news source specified in the first column. On the other hand, the "Output of Cleaned Model" (M_clean) column refers to the summaries generated by a model fine-tuned on the articles of Newsroom from the news source after discarding the articles whose extracts are flagged as noisy or as straplines. We found two main improvements in the quality of the generated summaries: (i) summaries from M_clean tend to be more informative compared to those from M_original, and (ii) summaries from M_clean do not exhibit as many undesired characteristics of straplines, such as using a repeated summary, using a question mark, or using 1st person pronouns, while those from M_original tend to have such properties. The fact that these improvements do not have a large impact on the automatic evaluation metric (ROUGE) for Time.com implies that human evaluation is needed to accompany the automatic evaluation metrics in order to quantify such qualitative improvements.
Conclusion
We present methods for improving the data quality in scraped news summarization corpora, focusing on the New York Times and Time magazine sections of Newsroom (Grusky et al., 2018). We identify two main issues with the data quality that make Newsroom less appropriate as a summarization benchmark: (i) noise in the extraction process and (ii) presence of straplines in place of genuine summaries. After identifying common characteristics of straplines, we develop a set of effective heuristics for detecting straplines and noise.
Our work shows that when straplines and noisy data are excluded from the training data, the resulting summarizer produces better summaries based on comparison to reference texts. Although we found noise and straplines to be more prevalent in the Time magazine data, the impact of removing noise and straplines is bigger for the model trained on the NYT data, which avoids reusing the same summary multiple times. We plan to investigate this further in future work. Because of our focus on two specific news sources in Newsroom, we suspect that our heuristics might not work quite as well on other news sources having different styles, or on other datasets that were collected differently.
A Implementation details of noise filtration and strapline detection heuristics
Before applying the noise filtration and the strapline detection heuristics, spaCy's en_core_web_sm model (Honnibal and Montani, 2017) was used to tokenize the extracts and determine the POS tags of the tokens.
A.1 Noise filtration
Web formatting syntax The following regular expressions, <[a-zA-Z0-9_]+[/]?> and [a-z]+=", were used to determine the presence of HTML tags and key/value pairs as part of the extract. The first one looks for opening HTML tags in the form <ALPHA_NUMERIC_SYMBOL>, and closing HTML tags in the form <ALPHA_NUMERIC_SYMBOL/>. The second regular expression looks for alphabetic symbols followed by an equals sign and a double quotation mark.
Truncation An extract is considered to be truncated if it ends with a comma or ends with a word whose part of speech (pos) tag is a determiner, a coordinating conjunction, a subordinating conjunction, or an unknown pos tag.
Dateline Since dates might have different formats, a python package called dateutil 5 was used to parse the extract. An extract is considered as a dateline if the package manages to parse it according to any of the package's formats for dates.
Shortness Extracts having three or less tokens (after excluding punctuation marks) are considered to be trivially short and thus removed from the dataset.
Non-English On looking at the unique characters of the Newsroom dataset, we noticed that it contains characters from other scripts, such as Arabic and Chinese. Consequently, a python package called langdetect 6, which is ported from one of Google's projects (Shuyo, 2010), was used in order to filter out articles that aren't written in English. The article's text was used instead of the extract to detect the language, since the langdetect package has higher precision when supplied with longer spans of text (i.e., when given the whole article text instead of just the extract). This implies that we are assuming that the language of the article's body and its extract will be the same, and that having a non-English body is enough to discard the article-extract pair from the dataset.
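Putting the rules above together, a minimal sketch of the noise filter could look as follows. The function name, the tag set used for the truncation rule, and the error handling are our own choices; the regular expressions, the three-token threshold, and the package choices follow the descriptions above.

```python
import re
import spacy
from dateutil import parser as date_parser
from langdetect import detect

nlp = spacy.load("en_core_web_sm")
HTML_PATTERN = re.compile(r'<[a-zA-Z0-9_]+[/]?>|[a-z]+="')
# DT, CC, IN, XX: determiner, coordinating/subordinating conjunction, unknown tag.
TRUNCATION_TAGS = {"DT", "CC", "IN", "XX"}

def is_noisy(extract, article_body):
    if HTML_PATTERN.search(extract):          # web formatting syntax
        return True
    doc = nlp(extract)
    words = [t for t in doc if not t.is_punct]
    if len(words) <= 3:                       # trivially short extract
        return True
    if extract.rstrip().endswith(",") or words[-1].tag_ in TRUNCATION_TAGS:
        return True                           # truncated extract
    try:
        date_parser.parse(extract)            # the whole extract parses as a date
        return True
    except (ValueError, OverflowError):
        pass
    try:
        if detect(article_body) != "en":      # non-English article body
            return True
    except Exception:
        return True                           # treat undetectable language as noise
    return False
```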
A.2 Strapline detection
Beginning with imperative speech If the pos tag of the first token in the extract is VB (base form of verb), then the extract is considered to be beginning with an imperative.
Having high quotes coverage A simple pattern matching function is used to compute the percentage of the tokens found between quotes in the extract. An extract is considered as a strapline if its quotes coverage is higher than a preset threshold (a hyperparameter set to 0.35 based on manual investigations of the dataset).
Using 1st or 2nd person pronouns If any of the extract's tokens is part of the following list (i, me, mine, myself, we, our, ours, ourselves, you, your, yours, yourself, yourselves), then it is said to use 1st or 2nd person pronouns.
Using question/exclamation marks The presence of a question or an exclamation mark is used to simplify the detection of interrogative/exclamation phrases.

Using a repeated extract If an extract is repeated more than once in the training dataset, then it is discarded. Using a clustering method such as Density-Based Spatial Clustering of Applications with Noise (DBSCAN) (Ester et al., 1996) on top of sparse term frequency vectors representing the extracts achieves better performance at the expense of running time. Therefore, we opted to use the simple method of having exact matches as a method to detect repeated extracts.
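A corresponding sketch of the strapline heuristics, with our own function names but using the 0.35 quote-coverage threshold and the pronoun list given above, is shown below; the clickbait classifier of §4.2 would be added as a further disjunct. The quote-coverage computation is approximated with a whitespace split rather than spaCy tokens.

```python
import re
from collections import Counter
import spacy

nlp = spacy.load("en_core_web_sm")
PRONOUNS = {"i", "me", "mine", "myself", "we", "our", "ours", "ourselves",
            "you", "your", "yours", "yourself", "yourselves"}
QUOTED = re.compile(r'"([^"]*)"')

def quote_coverage(extract):
    """Approximate fraction of tokens that appear inside double quotes."""
    quoted_tokens = sum(len(span.split()) for span in QUOTED.findall(extract))
    return quoted_tokens / max(len(extract.split()), 1)

def is_strapline(extract, extract_counts):
    """Flag an extract if any rule-based heuristic of Section 3.2.2 fires."""
    doc = nlp(extract)
    checks = [
        len(doc) > 0 and doc[0].tag_ == "VB",          # imperative opening
        quote_coverage(extract) > 0.35,                # high quote coverage
        any(t.text.lower() in PRONOUNS for t in doc),  # 1st/2nd person pronouns
        "?" in extract or "!" in extract,              # question/exclamation mark
        extract_counts[extract] > 1,                   # repeated, generic extract
    ]
    return any(checks)

# Usage: count extracts over the training split first, then classify each one.
extracts = ["Check to see if you're part of a bigger problem", "USA! USA! USA!"]
counts = Counter(extracts)
print([is_strapline(e, counts) for e in extracts])
```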
B Hyperparameters in the experiments
Clickbait Detector We fine-tune distilled BERT using the AdamW optimizer (Loshchilov and Hutter, 2018), the early stopping mechanism with a patience of 5, a batch size of 128, and a learning rate of 10^-4. The max input length is set to 512.
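A hedged sketch of this fine-tuning setup with the Hugging Face Trainer is given below. Only the batch size, learning rate, patience, and input length come from the description above; the tiny in-memory dataset stands in for Webis-Clickbait-17, and the epoch cap is our own assumption.

```python
from datasets import Dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments, EarlyStoppingCallback)

# Toy stand-in for the Webis-Clickbait-17 splits (label 1 = clickbait); the real
# dataset has to be downloaded and converted separately.
train_data = Dataset.from_dict({
    "text": ["You won't believe what happened next", "Parliament passes the budget bill"],
    "label": [1, 0]})
dev_data = Dataset.from_dict({"text": ["10 tricks doctors hate"], "label": [1]})

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=2)

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512,
                     padding="max_length")

train_data = train_data.map(tokenize, batched=True)
dev_data = dev_data.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="clickbait-detector",
    per_device_train_batch_size=128,   # batch size from Appendix B
    learning_rate=1e-4,                # learning rate from Appendix B
    num_train_epochs=20,               # assumption: the paper only fixes the patience
    evaluation_strategy="epoch",
    save_strategy="epoch",
    load_best_model_at_end=True,
    metric_for_best_model="loss",
    greater_is_better=False,
)

# Trainer uses AdamW by default; early stopping with patience 5 as described above.
trainer = Trainer(model=model, args=args,
                  train_dataset=train_data, eval_dataset=dev_data,
                  callbacks=[EarlyStoppingCallback(early_stopping_patience=5)])
trainer.train()
```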
T5-based Summarizer
The max lengths of the input and output are set to 512 and 128, respectively. We fine-tune T5 using the AdamW optimizer (Loshchilov and Hutter, 2018), the early stopping mechanism with a patience of 5, a batch size of 32, and a learning rate of 10^-4.
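A minimal sketch of one T5 training and generation step with these lengths and this learning rate is shown below; the single article-summary pair is invented, and the "summarize:" input prefix is the usual T5 convention rather than something the paper specifies.

```python
import torch
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained("t5-base")
model = T5ForConditionalGeneration.from_pretrained("t5-base")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # learning rate from Appendix B

# One illustrative article-summary pair; the real data is the cleaned Newsroom subset.
article = "summarize: The Yankees victory parade on Friday drew large crowds to Manhattan."
summary = "The Yankees victory parade on Friday was a celebration of the team's success."

inputs = tokenizer(article, max_length=512, truncation=True, return_tensors="pt")
labels = tokenizer(summary, max_length=128, truncation=True, return_tensors="pt").input_ids

model.train()
loss = model(**inputs, labels=labels).loss   # cross-entropy over the summary tokens
loss.backward()
optimizer.step()
optimizer.zero_grad()

# Generation at inference time, capped at the 128-token output length.
model.eval()
generated = model.generate(**inputs, max_length=128, num_beams=4)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```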
C Results of fine-tuning T5-large
Looking at the ROUGE scores in Table 1, one can notice that fine-tuning a T5-large summarizer yields trends similar to those found when fine-tuning a T5-base summarizer (as discussed in the main paper). While T5-large achieves higher absolute ROUGE scores, the effect of removing noise and straplines from the training corpus is nearly the same for both the T5-base and the T5-large models, which demonstrates that more attention needs to be given to the quality of the dataset rather than to using larger models.
D Distribution of Heuristics
Figure 1: A strapline ("Don't expect ...") that is mistaken for a summary in the Newsroom corpus.
Headline | Extract | Characteristic | Heuristic
Awesome! Interactive Internet health map checks your states connection | Check to see if you're part of a bigger problem | Clickbait | Imperative, pronouns
Sochi Olympics: USA Canada Hockey Game Sparks "Loser Keeps Bieber" Ad | USA! USA! USA! | Little information | Too short, exclamation mark
Bill OReilly: More trouble overseas for President Obama and America | The OReilly Factor on FoxNews.com with Bill OReilly, Weeknights at 8 PM and 11 PM EST | General statement | Repeated extract
Sofia Vergara and fiance split, read (and love) the charming statement | At least we know Sofia is probably writing this herself! | Comment | Pronouns
¿Quieres seguir viendo noticias en Facebook? Aquí te decimos qué hacer | Facebook cambió su algoritmo para priorizar [...] | N/A | Non-English article

Table 1: Examples of straplines from the Newsroom along with a salient characteristic and the relevant automatic heuristics for strapline detection.
Table 2: Distribution of extract annotations among labels on the annotated portion of the test set. Annotations were collected for 100 random samples from each source (NYT and Time.com), resulting in a total of 200 annotated samples.
Round | Straplines Raw | Straplines κ | Summaries Raw | Summaries κ
1 | 0.70 | 0.36 | 0.72 | 0.37
2 | 0.82 | 0.55 | 0.80 | 0.49

Table 3: Inter-annotator agreement for strapline and summary annotations.
Table 4: Results of the rule-based strapline classification as a binary classification problem (Strapline / Not Strapline).

Source | Split | Noise | Strapline | Total
NYT | Training Set | 899 (1.89%) | 9,537 (20.07%) | 47,529
NYT | Test Set | 101 (2.00%) | 1,002 (19.86%) | 5,045
Time.com | Training Set | 937 (4.35%) | 8,102 (37.61%) | 21,541
Time.com | Test Set | 108 (4.60%) | 893 (38.03%) | 2,348

Table 5: Number and % of noise and straplines our rule-based heuristics detected in the NYT and Time.com data sections of Newsroom.
News source | Output of Original Model (M_original) | Output of Cleaned Model (M_clean)
NYT | A day in the life of a Olympic athlete. | The Australian swimmer Mack Horton was booed by Chinese swimmers after his victory in the 200-meter freestyle, and Russian swimmer Irina Efimova was booed.
NYT | A day in the life of a Yankees fan. | The Yankees victory parade on Friday was a celebration of the team's success, but not everyone was there.
NYT | A New York Times blog about comic book publishing and design. | Kevin Conroy's performances as Batman in the comic books, movies and television series stand out.
NYT | New York Times reporters and editors are reporting from Washington, D.C. | A New Hampshire biologist turned to film school to learn how to communicate scientific information.
NYT | Reading, watching, discussing and blogging the day's local, national, and international news at The New York Times. | The University of Illinois, Chicago, has a bright spot in its diversity.
NYT | To the Editor:. Readers respond to an Op-Ed article about climate talks. | To the Editor:. Readers responded to a recent editorial about the dangers of concealed carry.
Time.com | TIME 100 poll: Who is the world's most influential leader? | The Russian president has risen to second place in the TIME 100 poll, beating out world leaders like Pope Francis and Barack Obama
Time.com | California is cutting back on its water use, but where is it going? | California is cutting back on water usage by 25%, but the state isn't out of water

Table 7: Example summaries selected from the outputs of the model fine-tuned on the original dataset and the cleaned dataset. Spans showing characteristics of straplines are underlined and shown in bold text.
5 https://dateutil.readthedocs.io/en/stable/
6 https://pypi.org/project/langdetect/
Using a repeated extract If an extract is repeated more than once in the training dataset, it is discarded; a clustering method such as Density-Based Spatial Clustering of Applications with Noise (DBSCAN) could be used instead, as discussed above.
Table 2 shows the distribution within the NYT and Time.com datasets, including both noisy samples and straplines. Note that there might be overlap between different heuristics.

NYT original |
NYT w/o straplines | 21.80 5.30 17.19 | 23.43 6.03 18.34
Time.com original | 16.47 3.44 13.64 | 19.36 4.32 15.82
Time.com w/o straplines | 16.07 3.38 13.43 | 19.28 4.46 15.96
Combined original | 20.19 5.50 16.41 | 21.54 5.25 17.05
Combined w/o straplines | 19.61 4.60 15.79 | 22.07 5.55 17.60

Table 1: ROUGE-1, ROUGE-2, and ROUGE-L scores for the abstractive summarizer (T5-large version) trained on the dataset with and without the straplines. The best results are in bold.

Heuristic | NYT Training Set | NYT Test Set | Time.com Training Set | Time.com Test Set

Table 2: The distribution of the heuristics (both noise and straplines) within the datasets.
https://writing-guidelines.web.cern.ch/entries/strapline-strap.html
The corresponding scores for the T5-large summarizer are reported in Table 1 in the Appendix.
Acknowledgments
This work was supported in part by the UKRI Centre for Doctoral Training in Natural Language Processing, funded by the UKRI (grant EP/S022481/1) and the University of Edinburgh, School of Informatics and School of Philosophy, Psychology & Language Sciences.
Rishi Bommasani and Claire Cardie. 2020. Intrinsic evaluation of summarization datasets. In EMNLP.
Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1):37-46.
Martin Ester, Hans-Peter Kriegel, Jörg Sander, Xiaowei Xu, et al. 1996. A density-based algorithm for discovering clusters in large spatial databases with noise. In KDD, volume 96, pages 226-231.
Max Grusky, Mor Naaman, and Yoav Artzi. 2018. Newsroom: A dataset of 1.3 million summaries with diverse extractive strategies. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 708-719, New Orleans, Louisiana. Association for Computational Linguistics.
Matthew Honnibal and Ines Montani. 2017. spaCy 2: Natural language understanding with Bloom embeddings, convolutional neural networks and incremental parsing. To appear.
Taehee Jung, Dongyeop Kang, Lucas Mentch, and Eduard H. Hovy. 2019. Earlier isn't always better: Sub-aspect analysis on corpus and system biases in summarization. ArXiv, abs/1908.11723.
Ryuji Kano, Takumi Takahashi, Toru Nishino, Motoki Taniguchi, Tomoki Taniguchi, and Tomoko Ohkuma. 2021. Quantifying appropriateness of summarization data for curriculum learning. In Proceedings of the 16th Conference of the European Chapter of the Association for Computational Linguistics: Main Volume, pages 1395-1405, Online. Association for Computational Linguistics.
Wojciech Kryscinski, Nitish Shirish Keskar, Bryan McCann, Caiming Xiong, and Richard Socher. 2019. Neural text summarization: A critical evaluation. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 540-551, Hong Kong, China. Association for Computational Linguistics.
Chin-Yew Lin. 2004a. Looking for a few good metrics: Automatic summarization evaluation, how many samples are enough? In NTCIR.
Chin-Yew Lin. 2004b. ROUGE: A package for automatic evaluation of summaries. In Text Summarization Branches Out, pages 74-81.
Ilya Loshchilov and Frank Hutter. 2018. Fixing weight decay regularization in Adam.
Ramesh Nallapati, Bowen Zhou, Cicero dos Santos, Çaglar Gulçehre, and Bing Xiang. 2016. Abstractive text summarization using sequence-to-sequence RNNs and beyond. In Proceedings of The 20th SIGNLL Conference on Computational Natural Language Learning, pages 280-290, Berlin, Germany. Association for Computational Linguistics.
Anjali Narayan-Chen, Prashant Jayannavar, and Julia Hockenmaier. 2019. Collaborative dialogue in Minecraft. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, pages 5405-5415, Florence, Italy. Association for Computational Linguistics.
Martin Potthast, Tim Gollub, Kristof Komlossy, Sebastian Schuster, Matti Wiegmann, Erika Patricia Garces Fernandez, Matthias Hagen, and Benno Stein. 2018. Crowdsourcing a large corpus of clickbait on Twitter. In Proceedings of the 27th International Conference on Computational Linguistics, pages 1498-1507.
Colin Raffel, Noam Shazeer, Adam Roberts, Katherine Lee, Sharan Narang, Michael Matena, Yanqi Zhou, Wei Li, and Peter J. Liu. 2019. Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683.
Alexander M. Rush, Sumit Chopra, and Jason Weston. 2015. A neural attention model for abstractive sentence summarization. In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 379-389, Lisbon, Portugal. Association for Computational Linguistics.
Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. ArXiv, abs/1910.01108.
Nakatani Shuyo. 2010. Language detection library for Java.
Karen Spärck Jones. 1999. Automatic summarising: factors and directions. In Advances in Automatic Text Summarisation. MIT Press.
Priyam Tejaswin, Dhruv Naik, and Pengfei Liu. 2021. How well do you know your summarization datasets? In Findings. |
227,230,445 | [] | Statistical Parsing of Tree Wrapping Grammars
Online, December 8-13, 2020
Tatiana Bladier bladier@phil.hhu.de
Heinrich Heine University of Düsseldorf Universitätsstraße 1
40225 Düsseldorf, Germany
Jakub Waszczuk waszczuk@phil.hhu.de
Heinrich Heine University of Düsseldorf Universitätsstraße 1
40225 Düsseldorf, Germany
Laura Kallmeyer kallmeyer@phil.hhu.de
Heinrich Heine University of Düsseldorf Universitätsstraße 1
40225 Düsseldorf, Germany
Statistical Parsing of Tree Wrapping Grammars
Proceedings of the 28th International Conference on Computational Linguistics, Barcelona, Spain (Online), December 8-13, 2020, page 6759.
We describe an approach to statistical parsing with Tree-Wrapping Grammars (TWG). TWG is a tree-rewriting formalism which includes the tree-combination operations of substitution, sister-adjunction and tree-wrapping substitution. TWGs can be extracted from constituency treebanks and aim at representing long distance dependencies (LDDs) in a linguistically adequate way. We present a parsing algorithm for TWGs based on neural supertagging and A* parsing. We extract a TWG for English from the treebanks for Role and Reference Grammar and discuss first parsing results with this grammar.
Introduction
We present a statistical parsing approach for Tree-Wrapping Grammar (TWG) (Kallmeyer et al., 2013). TWG is a grammar formalism closely related to Tree-Adjoining Grammar (TAG) (Joshi and Schabes, 1997), which was originally developed with regard to the formalization of the typologically oriented Role and Reference Grammar (RRG) (Van Valin and LaPolla, 1997;Van Valin Jr, 2005). TWG allows for, among others, a more linguistically adequate representation of long distance dependencies (LDDs) in sentences, such as topicalization or long distance wh-movement. In the present paper we show a grammar extraction algorithm for TWG, propose a TWG parser, and discuss parsing results for the grammar extracted from the RRG treebanks RRGbank and RRGparbank 1 (Bladier et al., 2018).
Similarly to TAG, TWG has the elementary tree combination operations of substitution and sister-adjunction. Additionally, TWG includes the operation of tree-wrapping substitution, which accounts for preserving the connection between the parts of discontinuous constituents. Operations similar to tree-wrapping substitution were proposed by (Rambow et al., 1995) as subsertion in D-Tree Grammars (DTG) and by (Rambow et al., 2001) as generalized substitution in D-Tree Substitution Grammar (DSG). To our best knowledge, no statistical parsing approach was proposed for DTG or DSG. An approach to symbolic parsing for TWGs with edge features was proposed in (Arps et al., 2019). In this work, we propose a statistical parsing approach for TWG and extend the pipeline based on supertagging and the A* algorithm (Waszczuk, 2017), originally developed for TAG, to be applied to TWG.
The contributions of the paper are the following: 1) We present the first approach to statistical parsing for Tree-Wrapping Grammars. 2) We propose an extraction algorithm for TWGs based on the algorithm developed for TAG by (Xia, 1999). 3) We extend and modify the neural A TAG-parser (Waszczuk, 2017;Kasai et al., 2018; to handle the operation of tree-wrapping substitution. used to capture long distance dependencies (LDDs), see the wh-movement in Fig. 1. Here, the left tree with the d-edge (depicted as a dashed edge) gets split; the lower part fills a substitution slot while the upper part merges with the root of the target tree. TWG is more powerful than TAG (Kallmeyer, 2016). The reason is that a) TWG allows for more than one wrapping substitution stretching across specific nodes in the derived tree and b) the two target nodes of a wrapping substitution (the substitution node and the root node) need not come from the same elementary tree, which makes wrapping non-local compared to adjunction in TAG. ; Figure 1: Tree-wrapping substitution for the sentence "What do you think you remember" with longdistance wh-movement.
CLAUSE
Linguistic phenomena leading to LDDs differ across languages. Among LDDs in English are some cases of extraction of a phrase to a non-canonical position with respect to its head, which is typically fronting in English (Candito and Seddah, 2012). We identified the following LDD variants in our data which can be captured with tree-wrapping substitution: long-distance relativization, long-distance wh-movement, and long-distance topicalization, which we discuss in Section 6.² LDD cases are rather rare in the data, which is partly due to the RRG analysis of operators such as modals, which do not embed CORE constituents (in contrast to, for example, the analyses in the Penn Treebank). Only 0.11% of the tokens in our experiment data (including punctuation) are dislocated from their canonical position in the sentence to form an LDD. This number is on a par with the 0.16% of tokens reported by Candito and Seddah (2012) for French data.
Statistical Parsing with TWGs
The proposed A* TWG parser³ is a direct extension of the simpler A* TAG parser described in (Waszczuk, 2017). The parser is specified in terms of weighted deduction rules (Shieber et al., 1995; Nederhof, 2003) and can also be seen as a weighted variant of the symbolic TWG parser (Arps et al., 2019). As in previous work, both TWG elementary trees (supertags) and dependency links are weighted, a schema also used in A* CCG parsing (Yoshikawa et al., 2017). These weights come directly from a neural supertagger and dependency parser, similar to the one proposed by (Kasai et al., 2018). Parsing then consists in finding a best-weight derivation among the derivations that can be constructed based on the deduction rules for a given sentence. The supertagger takes as input a sequence of word embeddings⁴ $(x_i)_{i=1}^{n}$, to which a 2-layer BiLSTM transducer is applied to provide the contextualized word representations $(h_i)_{i=1}^{n}$, common to all subsequent tasks: POS tagging, TWG supertagging, and dependency parsing. On top of that, we apply two additional 2-layer BiLSTM transducers in order to obtain the supertag- and dependency-specific word representations:
$(h^{(sup)}_1, \ldots, h^{(sup)}_n) = \mathrm{BiLSTM}_s(h_1, \ldots, h_n) \quad (1)$
$(h^{(dep)}_1, \ldots, h^{(dep)}_n) = \mathrm{BiLSTM}_d(h_1, \ldots, h_n) \quad (2)$
The supertag-specific representations are used to predict both supertags and POS tags (POS tagging is a purely auxiliary task, since POS tags are fully determined by the supertags):
$\Pr(\mathit{sup}(i)) = \mathrm{softmax}(\mathrm{Linear}_s(h^{(sup)}_i)) \quad (3)$
$\Pr(\mathit{pos}(i)) = \mathrm{softmax}(\mathrm{Linear}_p(h^{(sup)}_i)) \quad (4)$
Finally, the dependency parsing component is based on biaffine scoring (Dozat and Manning, 2017), in which the head and dependent representations are obtained by applying two feed-forward networks to the dependency-specific word representations,
$hd_i = \mathrm{FF}_{hd}(h^{(dep)}_i)$ and $dp_i = \mathrm{FF}_{dp}(h^{(dep)}_i)$. The score of word $j$ becoming the head of word $i$ is then defined as:
$\phi(i, j) = dp_i^{T} M \, hd_j + b^{T} hd_j, \quad (5)$
where $M$ is a matrix and $b$ is a bias vector.⁵ Extending the TAG parser to TWG involved adapting the weighted deduction rules to handle wrapping substitution as well as updating the corresponding implementation with appropriate index structures to speed up querying the chart. The A* heuristic is practically unchanged and it is both admissible (by construction) and monotonic (checked at run time), which guarantees that the first derivation found by the parser is the one with the best weight. The scheme of our parsing architecture is shown in Fig. 2. In Appendix A we provide details on the modifications we have applied to the A* parser to handle tree-wrapping substitution.
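As an illustration of the biaffine scoring in Eq. (5), a minimal PyTorch sketch could look as follows; the module name, hidden sizes and initialization are illustrative assumptions, not the ParTAGe implementation:

```python
import torch
import torch.nn as nn

class BiaffineScorer(nn.Module):
    """Scores phi(i, j) = dp_i^T M hd_j + b^T hd_j for every (dependent, head) pair."""
    def __init__(self, in_dim: int, hid_dim: int):
        super().__init__()
        self.ff_head = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())  # FF_hd
        self.ff_dep = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())   # FF_dp
        self.M = nn.Parameter(torch.randn(hid_dim, hid_dim) * 0.01)
        self.b = nn.Parameter(torch.zeros(hid_dim))

    def forward(self, h_dep: torch.Tensor) -> torch.Tensor:
        # h_dep: (n, in_dim) dependency-specific word representations
        hd = self.ff_head(h_dep)             # (n, hid_dim) head representations
        dp = self.ff_dep(h_dep)              # (n, hid_dim) dependent representations
        scores = dp @ self.M @ hd.T          # (n, n), entry [i, j] = dp_i^T M hd_j
        scores = scores + (hd @ self.b)      # adds b^T hd_j to every column j
        return scores                        # scores[i, j] = phi(i, j)

# usage: 5 words with 128-dimensional contextual vectors
scorer = BiaffineScorer(in_dim=128, hid_dim=64)
print(scorer(torch.randn(5, 128)).shape)     # torch.Size([5, 5])
```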
TWG extraction
To extract a TWG from RRGbank and RRGparbank, we adapt the top-down grammar extraction algorithm developed by Xia (1999) for TAG. While initial and sister-adjoining trees can be extracted following this algorithm, we added a new procedure to extract d-edge trees for the wrapping substitution operation. Extraction of initial and sister-adjoining elementary trees requires manually defined percolation tables for marking head and modifier nodes. In order to extract d-edge elementary trees for LDDs, dependent constituents need to be marked prior to TWG extraction. In RRGbank and RRGparbank the constituents belonging to LDDs are indicated with the features PRED-ID and NUC-ID and an index. These indicated parts alongside the mother node are extracted to form a single tree with a dominance link (d-edge) (see for instance the elementary tree for "What to say" in Fig. 3). The remaining nodes plus the duplicated mother node and a substitution slot form the target tree, for example the tree for "I'm trying" in Fig. 3. A more detailed formal description of our extraction algorithm along with a link to the percolation tables can be found in Appendix B.

We have taken the gold and silver data from RRGbank and the English part of RRGparbank.⁶ The data is split into a train and a test set. We have taken 10% of the sentences from the train set to create a dev set. Thus, our train, dev and test sets include 4960, 551, and 2145 trees, respectively. There are 46 constituents with LDDs in the train set, 5 in the dev set and 27 in the test set. We extracted a TWG from this data and present in Table 1 statistics on the elementary tree templates (supertags) in the TWG.

We compare the parsing results with the parser DiscoDOP (van Cranenburgh and Bod, 2013), which is based on the discontinuous data-oriented parsing model. We also compare our results with the state-of-the-art transition-based parser Discoparset (Coavoux and Cohen, 2019). We evaluated⁷ the overall performance of the parsers and also analyzed how well all three systems predict LDDs (see Tables 2 and 3). Unrelated to LDDs, the treebanks contain crossing branches (e.g., for operators and modifiers). Prior to TWG extraction, we decross these while keeping track of the transformation in order to be able to reverse it. For parsing with DiscoDOP and Discoparset, we added crossing branches for all LDDs. To evaluate LDD prediction with DiscoDOP and Discoparset, we counted how many crossing branches were established in the parsed trees. For ParTAGe we counted the LDD predictions as correct whenever the predicted supertags and dependencies indicated that the long distance element would be substituted into the elementary tree of the corresponding predicate. We counted partially correct LDDs in both parsing architectures as correctly predicted as long as the connection between the predicate and the fronted element was predicted.
Error analysis for LDD prediction
We evaluated the performance of our parsing architecture with regard to the labeled F1-score and we also focused on the prediction of LDDs (see Tables 2 and 3). The results show that ParTAGe predicted the LDDs in the test data more accurately than the compared parsers. Please note that LDDs are generally rare in the corpus data and that we also had only about 5000 sentences in the training data. Some mistakes resulted from the wrong prediction of a POS tag, which leads to the parser confusing an LDD constituent with a construction without an LDD. For example, in (1), the word "is" should have the POS tag V, but the parsing system erroneously labels it as AUX (= auxiliary) and thus interprets the wh-element as a predicate. In order to check our assumption about POS tags as a source of error, we have run an experiment in which we presented the parser with gold POS tags. Although this additional information helped to rule out the LDD errors in (1), the restriction of the available supertags introduced new errors in LDD predictions (see Table 3) and was also the reason why some sentences could not be parsed (as shown in Table 2).

6 Gold annotated data means that the data were annotated and approved by at least two annotators of RRGbank or RRGparbank, and silver data means an annotation by one linguist.
7 We use the evaluation parameters distributed together with DiscoDOP for constituency parsing evaluation. Our evaluation file is available at https://github.com/TaniaBladier/Statistical_TWG_Parsing/blob/main/experiments/eval.prm.
(1) a. What is one to think of all this? ("is" tagged AUX instead of V) b. [...] which he told her to place on her tongue ("which" tagged CLM instead of PRO-REL)
In some cases where the relative or wh phrase of the LDD is an adjunct, as in (2), the parser incorrectly attaches it higher, taking it to be a modifier of the embedding verb.
(2)
And why do you imagine that we bring people to this place?
Cases where the embedding verb also has a strong tendency to take a wh-element as an argument sometimes get parsed incorrectly: in (3), "which" is analysed as an argument of "said".
(3)
[...] slip of paper which they said was the bill
Conclusions and Outlook
We have presented a statistical parsing algorithm for Tree-Wrapping Grammar, a grammar formalism inspired by TAG which aims at linguistically better representations of long distance dependencies. The LDDs in TWG are represented in a single elementary tree called a d-edge tree, which is combined with the target tree using tree-wrapping substitution. This operation allows both parts of a discontinuous constituent to be placed simultaneously in the corresponding slots of the target tree. We have extracted a TWG for English from two RRG treebanks and have compared our parsing experiments with the parser DiscoDOP, based on the DOP parsing model, and with the transition-based parser Discoparset. We have evaluated our parser on the prediction of LDDs and could achieve more accurate results than the compared parsers. In our future work we plan to explore TWG extraction and parsing for different languages, since the linguistic phenomena leading to LDDs vary across languages. In particular, we have already started to work on the extraction of TWGs for German and French. We plan to apply our TWG extraction and parsing algorithm to other constituency treebanks, for example the French Treebank (Abeillé et al., 2003). We also plan to implement a slightly extended version of tree-wrapping substitution which would allow the parts of discontinuous constituents to be placed in various slots between the nodes of the target tree.

Appendix A

Our TWG parser is specified in terms of weighted deduction rules (Shieber et al., 1995; Nederhof, 2003). Each deduction rule (see Table 4) takes the form of a set of antecedent items, presented above the horizontal line, from which the consequent item (below the horizontal line) can be deduced, provided that the corresponding conditions (on the right) are satisfied. The specification of the TWG parser consists of 8 deduction rules which constitute a blend of the TAG parser with the symbolic TWG parser (Arps et al., 2019). Here, we assume familiarity with both these parsers and limit ourselves to explaining the features specific to the statistical TWG parser.
Weights. A pair (w, m) is assigned to each chart item via deduction rules, where w is the inside weight, i.e., the weight of the inside derivation, and m is a map assigning weights to the individual gaps in the corresponding gap list Γ. Since each gap in Γ can be uniquely identified by its starting position, we use the starting positions as keys in m. The need to use a map (dictionary) data structure instead of a single scalar value, as in the TAG parser, stems from the CW rule (complete wrapping), in which the calculation of the resulting weight map requires removing the weight corresponding to the gap (f_1, f_2, y).

We use ∅ to denote an empty map, m[x ⇒ y] to denote m with y assigned to x, m[x ⇒ ⊥] to denote m with x removed from the set of keys (together with the corresponding value), and sum(m) to denote the sum of values (weights) in the map m. We also re-use the concatenation operator ⊕ to represent map union. Whenever map union is used (m_1 ⊕ m_2), the sets of keys of the two map arguments (m_1 and m_2) are guaranteed to be disjoint (an invariant which can be proved by induction over the deduction rules).

Heuristic. Given a chart item η = (x, i, j, Γ) with the corresponding weights (w, m), the TWG A* heuristic (which provides a lower-bound estimate on the cost of parsing the remaining part of the sentence) is a straightforward generalization of the TAG A* heuristic used in previous work. In particular, it accounts for the total minimal cost of scanning each word outside the span (i, j), as well as the words remaining in the gaps in Γ. Thus, in contrast with the TAG heuristic, since there can be many gaps in Γ, the sum of the weights in the map m has to be accounted for.
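A small Python sketch of the gap-weight maps and of how an A*-style priority could combine the inside weight, the gap weights and an outside estimate is given below; the function names and the way the outside estimate is passed in are illustrative assumptions, not the ParTAGe code:

```python
def map_update(m: dict, key, value) -> dict:
    """m[x => y]: copy of m with value assigned to key (used, e.g., in the PW rule)."""
    out = dict(m)
    out[key] = value
    return out

def map_remove(m: dict, key) -> dict:
    """m[x => _|_]: copy of m with key and its weight removed (used in the CW rule)."""
    out = dict(m)
    out.pop(key, None)
    return out

def map_union(m1: dict, m2: dict) -> dict:
    """m1 (+) m2: union of two gap-weight maps with disjoint key sets."""
    assert not (m1.keys() & m2.keys()), "gap start positions must be disjoint"
    return {**m1, **m2}

def priority(w: float, m: dict, outside_estimate: float) -> float:
    """A*-style priority of a chart item: inside weight, plus the weights recorded
    for its gaps, plus a lower-bound estimate for the words outside the span."""
    return w + sum(m.values()) + outside_estimate

# e.g. combining two items and removing a completed gap starting at position 4
m = map_union({4: 1.5}, {9: 0.7})
print(priority(2.0, map_remove(m, 4), outside_estimate=3.0))   # 5.7
```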
Figure 2: Pipeline of our neural statistical TWG parsing architecture.
Figure 3: Extraction of a target tree and an elementary tree with a dominance edge (marked with a dotted line). The nodes with PRED-ID and NUC-ID in the left tree identify the components of the LDD.
(w, m) : (N → α •, i, j, Γ)  ⟹  (w, m) : (N, i, j, Γ, ws?)    where ws? = yes ⟺ dnode(N)
CS: (w_1, m_1) : (N → α • M β, i, j, Γ_1),  (w_2, m_2) : (M, j, k, Γ_2, no)  ⟹  (w_1 + w_2, m_1 ⊕ m_2) : (N → α M • β, i, k, Γ_1 ⊕ Γ_2)
SU: (w_1, m_1) : (N → α • M β, i, j, Γ_1),  (w_2, m_2) : (R, j, k, Γ_2, no)  ⟹  (w_1 + w_2 + ω(R, N), m_1 ⊕ m_2) : (N → α M • β, i, k, Γ_1 ⊕ Γ_2)    where leaf(M), root(R) ∧ ¬sister(R), ℓ(M) = ℓ(R)
SA: (w_1, m_1) : (N → α • β, i, j, Γ_1),  (w_2, m_2) : (M, j, k, Γ_2, no)  ⟹  (w_1 + w_2 + ω(M, N), m_1 ⊕ m_2) : (N → α • β, i, k, Γ_1 ⊕ Γ_2)    where ℓ(M) = ℓ(N) ∧ sister(M), ¬sister(N)
PW: (w_1, m_1) : (N → α • M β, i, j, Γ_1),  (w_2, m_2) : (D, j, k, Γ_2, yes)  ⟹  (w_1, m_1[j ⇒ w_2 + sum(m_2) + A(D)]) : (N → α M • β, i, k, Γ_1 ⊕ [(j, k, ℓ(D))])    where leaf(M), ℓ(M) = ℓ(D)
CW: (w_1, m_1) : (R, i, j, Γ_1 ⊕ [(f_1, f_2, y)] ⊕ Γ_2, ws?),  (w_2, m_2) : (D, f_1, f_2, Γ_3, yes)  ⟹  (w_1 + w_2 + ω(R, D), m_1[f_1 ⇒ ⊥] ⊕ m_2) : (D, i, j, Γ_1 ⊕ Γ_3 ⊕ Γ_2, no)    where root(R) ∧ y = ℓ(D), ℓ(parent(D)) = ℓ(R), ¬sister(R)
Table 2: Parsing results compared with DiscoDOP (van Cranenburgh et al., 2016) and Discoparset (Coavoux and Cohen, 2019). In the case of Discoparset, the numbers in subscript represent the relative gain provided by BERT (Devlin et al., 2019), which is used in neither the DiscoDOP nor the ParTAGe experiments.
Predicted LDDs | DiscoDOP (test) | Discoparset (test) | ParTAGe (test) | ParTAGe (test, gold POS)
# true positives | 13 | 14 | 22 | 18
# false positives | 7 | 0 | 0 | 0
# false negatives | 14 | 13 | 5 | 9

Table 3: Prediction of LDDs on test data.
Table 4: Weighted deduction rules of the TWG parser.
2 Another potential LDD case in English is it-clefts (for example "It was the uncertainty that Mr Lorin feared"). Although we have not found this LDD variant in our data, our parsing method will work for these cases as well.
3 The parser, the TWG extraction code and the recipes to reproduce the experiments described in this paper are available at https://github.com/TaniaBladier/Statistical_TWG_Parsing.
4 In our experiments (see Sec. 5), we used fastText (Bojanowski et al., 2016) to obtain the word vector representations.
5 The head representation $hd_0$ of the dummy root node is a parameter in this architecture.
8 A node $v_1$ is left of another node $v_2$ if the leftmost leaf dominated by $v_1$ is left of the leftmost leaf dominated by $v_2$.
9 The code for our TWG extraction algorithm along with the percolation tables for head and modifier distinction can be found in this repository: https://github.com/TaniaBladier/Statistical_TWG_Parsing.
Acknowledgements
We thank three anonymous reviewers for their insightful comments, as well as Rainer Osswald and Robin Möllemann for their help with collecting the experimental data and fruitful discussions. This work was carried out as a part of the research project TREEGRASP (treegrasp.phil.hhu.de) funded by a Consolidator Grant of the European Research Council (ERC).

Appendix B. TWG extraction algorithm

1. Decross tree branches. First, for local discontinuous constituents (for instance NUCs consisting of a verb and a particle in German), we split the constituent into two components (e.g., NUC1 and NUC2), both attached to the mother of the original discontinuous node. Second, if a tree τ still has crossing branches, the tree is traversed top-down from left to right and among its subtrees those trees are identified whose root labels contain one of the following strings: OP-, -PERI, -TNS, CDP, or VOC. For each such subtree γ in question with r being its root, we choose the highest node v below the next left⁸ sibling of r such that the rightmost leaf dominated by v immediately precedes the leftmost leaf dominated by r. If r and v are not yet siblings, γ is reattached to the parent of v. If the subtree in question has no left siblings, it is reattached to the right in a corresponding way. After this step, it is checked whether the tree τ still contains crossing branches. If yes, the process of decrossing branches is continued by applying the steps above to the next subtree in question.

2. Extract LDDs. Then we traverse each tree τ in a top-down left-to-right fashion and check for each subtree of τ whether it contains the following special markings for LDDs in its root label: PREDID=, NUCID= or REF=. The indexes identify the parts of the LDD which belong together. In case of an LDD, the parts of the minimal subtree which contain both parts of the LDD are extracted within a single tree with a d-edge (see the multicomponent NUC and CORE in Figure 3). The substitution site and the mother node are added to the remaining subtree in order to mark the nodes on which the wrapping substitution takes place (see Figure 3). After this step, an empty agenda is created and the extracted tree chunks and the pruned tree τ with the remaining nodes are placed into the agenda.

3. Extract initial and sister-adjoining trees. If no agenda with tree chunks was created in the previous step, an empty agenda is created in this step and the entire tree τ is placed into it. Each tree chunk in the agenda is traversed and the percolation tables⁹ are used to decide for each subtree τ_1 ... τ_n in the tree chunk whether it is a head, a complement or a modifier with respect to its parent. Initial trees for identified complements and sister-adjoining trees for identified modifiers are extracted recursively in a top-down fashion until each elementary tree has exactly one anchor site. Initial trees are extracted as follows: if a node of a subtree is identified as a complement, it is removed from the parent tree and the parent node is marked as a substitution slot. In order to extract sister-adjoining trees for identified modifier subtrees, the parent node of the subtree is copied and added as the new root node of the elementary tree with a special marking * on the root label.
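Step 2 could be sketched with nltk trees roughly as follows; the bracketed example tree and the exact label spellings are invented for illustration and do not reproduce an actual RRGbank analysis:

```python
from nltk import Tree

# Markers used in RRGbank/RRGparbank root labels to flag LDD components (step 2)
LDD_MARKERS = ("PREDID=", "NUCID=", "REF=")

def ldd_subtrees(tree: Tree):
    """Collect all subtrees whose root label carries one of the LDD markers,
    traversing the tree top-down and left-to-right."""
    return [st for st in tree.subtrees()
            if any(marker in st.label() for marker in LDD_MARKERS)]

# Invented bracketed tree for illustration only
example = Tree.fromstring(
    "(CLAUSE (PrCS (NP-PREDID=1 What)) (CORE (NP you) (NUC-NUCID=1 (V remember))))")
for subtree in ldd_subtrees(example):
    print(subtree.label(), subtree.leaves())
```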
Anne Abeillé, Lionel Clément, and François Toussenel. 2003. Building a treebank for French. In Treebanks, pages 165-187. Springer.
David Arps, Tatiana Bladier, and Laura Kallmeyer. 2019. Chart-based RRG parsing for automatically extracted and hand-crafted RRG grammars. University at Buffalo, Role and Reference Grammar RRG Conference 2019.
Tatiana Bladier, Andreas van Cranenburgh, Kilian Evang, Laura Kallmeyer, Robin Möllemann, and Rainer Osswald. 2018. RRGbank: a Role and Reference Grammar Corpus of Syntactic Structures Extracted from the Penn Treebank. In Proceedings of the 17th International Workshop on Treebanks and Linguistic Theories (TLT 2018), December 13-14, 2018, Oslo University, Norway, number 155, pages 5-16. Linköping University Electronic Press.
Tatiana Bladier, Jakub Waszczuk, Laura Kallmeyer, and Jörg Janke. 2019. From partial neural graph-based LTAG parsing towards full parsing. Computational Linguistics in the Netherlands Journal, 9:3-26, Dec.
Piotr Bojanowski, Edouard Grave, Armand Joulin, and Tomas Mikolov. 2016. Enriching word vectors with subword information. arXiv preprint arXiv:1607.04606.
Marie Candito and Djamé Seddah. 2012. Effectively long-distance dependencies in French: Annotation and parsing evaluation.
Maximin Coavoux and Shay B. Cohen. 2019. Discontinuous constituency parsing with a stack-free transition system and a dynamic oracle. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 204-217, Minneapolis, Minnesota, June. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota, June. Association for Computational Linguistics.
Timothy Dozat and Christopher D. Manning. 2017. Deep biaffine attention for neural dependency parsing.
Aravind K. Joshi and Yves Schabes. 1997. Tree-adjoining grammars. In Handbook of Formal Languages, pages 69-123. Springer.
Laura Kallmeyer, Rainer Osswald, and Robert D. Van Valin, Jr. 2013. Tree Wrapping for Role and Reference Grammar. In G. Morrill and M.-J. Nederhof, editors, Formal Grammar 2012/2013, volume 8036 of LNCS, pages 175-190. Springer.
Laura Kallmeyer. 2016. On the mild context-sensitivity of k-Tree Wrapping Grammar. In Annie Foret, Glyn Morrill, Reinhard Muskens, Rainer Osswald, and Sylvain Pogodalla, editors, Formal Grammar: 20th and 21st International Conferences, FG 2015, Barcelona, Spain, August 2015, Revised Selected Papers; FG 2016, Italy, August 2016, Proceedings, number 9804 in Lecture Notes in Computer Science, pages 77-93, Berlin. Springer.
Jungo Kasai, Robert Frank, Pauli Xu, William Merrill, and Owen Rambow. 2018. End-to-end graph-based TAG parsing with neural networks. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1181-1194, New Orleans, Louisiana, June. Association for Computational Linguistics.
Mark-Jan Nederhof. 2003. Squibs and Discussions: Weighted Deductive Parsing and Knuth's Algorithm. Computational Linguistics, 29(1).
Owen Rambow, K. Vijay-Shanker, and David Weir. 1995. D-Tree Grammars. In Proceedings of ACL.
Owen Rambow, K. Vijay-Shanker, and David Weir. 2001. D-tree substitution grammars. Computational Linguistics, 27(1):87-121.
Stuart M. Shieber, Yves Schabes, and Fernando C. N. Pereira. 1995. Principles and implementation of deductive parsing. The Journal of Logic Programming, 24(1):3-36.
Andreas van Cranenburgh and Rens Bod. 2013. Discontinuous parsing with an efficient and accurate DOP model. In Proceedings of the International Conference on Parsing Technologies (IWPT 2013). Citeseer.
Andreas van Cranenburgh, Remko Scha, and Rens Bod. 2016. Data-oriented parsing with discontinuous constituents and function tags. Journal of Language Modelling, 4(1):57-111.
Robert D. Van Valin, Jr. and Randy LaPolla. 1997. Syntax: Structure, meaning and function. Cambridge University Press.
Robert D. Van Valin, Jr. 2005. Exploring the syntax-semantics interface. Cambridge University Press.
Jakub Waszczuk. 2017. Leveraging MWEs in practical TAG parsing: towards the best of the two worlds. Ph.D. thesis.
Fei Xia. 1999. Extracting tree adjoining grammars from bracketed corpora. In Proceedings of the 5th Natural Language Processing Pacific Rim Symposium (NLPRS-99), pages 398-403.
Masashi Yoshikawa, Hiroshi Noji, and Yuji Matsumoto. 2017. A* CCG parsing with a supertag and dependency factored model. CoRR, abs/1704.06936. |
||
15,853,069 | Collaboratively Annotating Multilingual Parallel Corpora in the Biomedical Domain-some MANTRAs | The coverage of multilingual biomedical resources is high for the English language, yet sparse for non-English languages-an observation which holds for seemingly well-resourced, yet still dramatically low-resourced ones such as Spanish, French or German but even more so for really under-resourced ones such as Dutch. We here present experimental results for automatically annotating parallel corpora and simultaneously acquiring new biomedical terminology for these under-resourced non-English languages on the basis of two types of language resources, namely parallel corpora (i.e. full translation equivalents at the document unit level) and (admittedly deficient) multilingual biomedical terminologies, with English as their anchor language. We automatically annotate these parallel corpora with biomedical named entities by an ensemble of named entity taggers and harmonize non-identical annotations the outcome of which is a so-called silver standard corpus. We conclude with an empirical assessment of this approach to automatically identify both known and new terms in multilingual corpora. | [
9892921
] | Collaboratively Annotating Multilingual Parallel Corpora in the Biomedical Domain-some MANTRAs
Johannes Hellrich johannes.hellrich@uni-jena.de
Jena University Language & Information Engineering (JULIE) Lab Friedrich-Schiller-Universität Jena
Jena, Germany
Simon Clematide
Institute for Computational Linguistics
University of Zürich
Zürich, Switzerland
Udo Hahn udo.hahn@uni-jena.de
Jena University Language & Information Engineering (JULIE) Lab Friedrich-Schiller-Universität Jena
Jena, Germany
Dietrich Rebholz-Schuhmann
Institute for Computational Linguistics
University of Zürich
Zürich, Switzerland
Collaboratively Annotating Multilingual Parallel Corpora in the Biomedical Domain-some MANTRAs
Named Entity Recognition, Multilingual Terminologies, Silver Standard Corpus
The coverage of multilingual biomedical resources is high for the English language, yet sparse for non-English languages-an observation which holds for seemingly well-resourced, yet still dramatically low-resourced ones such as Spanish, French or German but even more so for really under-resourced ones such as Dutch. We here present experimental results for automatically annotating parallel corpora and simultaneously acquiring new biomedical terminology for these under-resourced non-English languages on the basis of two types of language resources, namely parallel corpora (i.e. full translation equivalents at the document unit level) and (admittedly deficient) multilingual biomedical terminologies, with English as their anchor language. We automatically annotate these parallel corpora with biomedical named entities by an ensemble of named entity taggers and harmonize non-identical annotations the outcome of which is a so-called silver standard corpus. We conclude with an empirical assessment of this approach to automatically identify both known and new terms in multilingual corpora.
Introduction
Biomedical terminologies assemble a huge amount of semantic metadata descriptors which span the whole range of conceptualizations relevant for the life sciences. They have shown their versatile usefulness and great importance in many application scenarios-ranging from biological database curation in molecular biology, e.g. gene/protein annotation (Camon et al., 2004), to clinical disease encoding (Spackman and Campbell, 1998) and patient record management (Campbell et al., 1997). Despite the reasonable claim that terminologies should be designed in a language-independent way, in reality, they all rely on verbalizations in a specific natural language. Actually, the vast majority of these terminological systems are phrased in English. This can be beneficial e.g. for terminological homogenization, when sciences converge on an internationally shared lingua franca such as English for molecular biology. But clearly for hospitals, health insurance companies and (mostly non-expert) patients the medical sublanguage will always remain their own nation's native language-in the English-speaking as well as the non-English-speaking countries. Hence, there is an enormous need for interlingual communication beyond the limits of the English language within Europe and also worldwide. There is, however, a striking lack of balance in the linguistic coverage of biomedical terminologies. Whereas English is very well covered in most of the relevant thematic areas in the life sciences, even otherwise well-resourced languages, such as Spanish, German or French, fall short of acceptable proportions of coverage in those areas, with loss rates of 60-90% (compared with the English coverage). Even worse, the wide range of definitely under-resourced languages (European ones such as Czech, Dutch, Turkish, Swedish or Polish and also many Non-European ones such as Hindi, Thai, Bengal, etc.) and, furthermore, the remaining low-and non-resourced languages (such as Bulgarian, Greek, etc.) have coverage loss rates between 95% to 99%, some of them even have no coverage at all (e.g. Croatian, Maltese, Latvian) for the life sciences. In essence, this means that the health care system of these countries is severely decoupled not only from the English-speaking biomedical community, and thus the much warranted interoperability of medical data (e.g. required in an age of increasing cross-border mobility of people and goods) is clearly out of sight. That is the reason for massive investments into multilingual biomedical terminological resources. The classical approach-manual terminology development-is not only resource-costly in terms of time and money but obviously doomed to failure since the coverage loss data have not changed much for decades so that the terminology gaps have not been closed despite the necessity of such resources. Also due to the conceptual dynamics in the life sciences this situation is likely to get worse rather than get better in the future. The MANTRA project 1 targets this scenario in that its main goal is the automatic enhancement of biomedical terminology resources for some selected non-English European languages. 
Starting from a massively trimmed version of the Unified Medical Language System (UMLS), 2 one of the most authoritative broad-band collections of terminology resources for the life sciences, and its English verbalizations of terms, in the MANTRA project methodological procedures are under development which help increase the more than limited coverage of Spanish, French, German and Dutch language terms within the UMLS.
The key idea is here to exploit three kinds of parallel corpora which contain sets of manually supplied pairwise direct translations of documents-titles from biomedical journal articles, drug product descriptions and claim sections from biomedical patents -for different kinds of lexical processing to generate translation equivalents from these sources. To gather results for a wide array of approaches the MANTRA project organized the CLEF-ER challenge competition 3 within the framework of CLEF (Conference and Labs of the Evaluation Forum) 2013. 4 Participants were asked to provide biomedical entity annotations, grounded in a stripped down version of the current UMLS, for the parallel corpora. A multitude of approaches, ranging from dictionary-based term extraction over named entity recognition to phrasal alignment within statistical machine translation, was used by the participants. The major methodological challenge for us was to harmonize the in-coming proposals for named entities and concepts-we defined a character-based metric which computes the term-wise overlap between all annotation contributions (Lewin et al., 2012;Lewin and Clematide, 2013). Our work resulted in an entirely new type of language resource: a set of parallel corpora in English, French, Spanish, German and Dutch, all annotated for biomedical terms of a large variety. We call this outcome a silver standard corpus (SSC) (see also our previous work on an Englishonly annotated corpus within the CALBC project (Rebholz-Schuhmann et al., 2010;Rebholz-Schuhmann et al., 2011)), since, unlike human-developed gold standards, this collection of semantic metadata has automatically evolved on the basis of an ensemble of entity taggers. In the following, we will describe the resources required and procedures crucial for the construction of the silver standard (Section 2.), as well as the annotations contained in the SSC, both for known and new terms (Section 3.).
Multilingual Language Resources
The preparation work for the CLEF-ER challenge comprised the compilation of the parallel corpora and the multilingual terminological resources.
Multilingual Parallel Texts
Our parallel corpora which contain manually translated text units were compiled from three publicly available document repositories. They were chosen in order to increase the diversity of text genres and phrasings. The MEDLINE collection 5 contains bilingual titles from biomedical journal articles, which can be searched via PUBMED. 6 The multilingual EMEA documents (Tiedemann, 2009) provide consumer-oriented information on the usage of drugs. 7 The multilingual patent claims from the European Patent Office 8 focus on the technical and legal aspects of biomedical [...].

We expected all three text genres to be highly parallel regarding their semantic content. The translations of patent texts and EMEA drug labels should reflect the original content for legal or regulatory reasons. However, in the case of EMEA, we detected a substantial amount of non-parallelism in the original EMEA text collection due to imperfect conversion from PDF to text. Using a filtering approach based on the number of characters in potentially parallel text units, we had to remove about 243k units of the 364k original EMEA units that we started working with from (Tiedemann, 2009). Medline titles were partly translated into English by the original authors of the article, partly they were translated by third parties. Non-ASCII characters such as accented vowels in French or Spanish as well as German umlauts were not well represented in the original MEDLINE data. Therefore, we used a technique based on character n-grams to reconstruct the original orthography of the non-English MEDLINE titles as much as possible.
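A character-count filter of the kind used for EMEA could look roughly like the sketch below; the ratio and minimum-length thresholds are illustrative assumptions, since the exact values used in the project are not stated here:

```python
def is_roughly_parallel(src: str, tgt: str, max_ratio: float = 1.5, min_chars: int = 10) -> bool:
    """Keep a candidate pair of parallel units only if both sides have a minimal length
    and their character counts do not diverge by more than max_ratio."""
    len_src, len_tgt = len(src.strip()), len(tgt.strip())
    if min(len_src, len_tgt) < min_chars:
        return False
    return max(len_src, len_tgt) / min(len_src, len_tgt) <= max_ratio

print(is_roughly_parallel("Take one tablet twice daily.",
                          "Nehmen Sie zweimal täglich eine Tablette."))   # True
print(is_roughly_parallel("Take one tablet twice daily.", "Ja."))          # False
```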
Multilingual Biomedical Terminology
The shared multilingual terminological resource (MTR) 9 (Rebholz-Schuhmann et al., 2013a) for the identification of novel terms from the parallel corpora has been derived from the Unified Medical Language System (UMLS) Metathesaurus (Bodenreider, 2004). The UMLS Metathesaurus incorporates over 100 biomedical terminologies, from which we selected the Medical Subject Headings (MESH), the Medical Dictionary for Regulatory Activities Terminology (MEDDRA, (Brown et al., 1999)) and the Systematized Nomenclature Of Medicine Clinical Terms (SNOMED-CT, (Stearns et al., 2001)).
In UMLS, terms are organized in synsets that are identified by a conceptual fix point, the so-called Concept Unique Identifier (CUI). Each concept (or CUI) may have multiple names per language, these are called synonyms which also cover the different translations of a term. CUIs are categorized into 15 broader semantic groups. We certainly did not want to provide the entire terminology, since it contains sets of terms that are either not relevant for the annotation of concepts in the biomedical literature or were deemed too problematic for the identification of multilingual biomedical terms. For example, the terms in the UMLS semantic group "Concepts & Ideas" (CONC) denote common English entities and concepts such as "contract" or "contract agreement" with less or low relevance for the annotation and translation of specific biomedical terminologies.
In order to choose the relevant semantic groups for inclusion in our MTR, all English corpora have been annotated with the full biomedical terminology and then all those semantic groups have been removed from the terminological resource that either contributed only a very small number of annotations (e.g., terms linked to genes), or that generated very unspecific annotations according to the manual inspection.
For the CLEF-ER challenge, the semantic groups "Activities and Behaviors" (ACTI), "Anatomy" (ANAT), "Chemicals and Drugs" (CHEM), "Devices" (DEVI), "Disorders" (DISO), "Geographic Areas" (GEOG), "Living Beings" (LIVB), "Objects" (OBJC), "Phenomena" (PHEN), and "Physiology" (PHYS) were kept. The MTR contains 531,466 concepts with 2,839,277 synonyms.

Inter-entity character counts and centroid:
a n d v i s c e r a l a d i p o s e t i s s u e i s
0 0 0 2 2 2 2 2 2 2 2 5 5 5 5 5 5 4 4 4 4 4 4 0 0

Extended centroids with varying boundary thresholds:
Boundary Thresholds | E-Centroid
1 or 2 | visceral adipose tissue
3 or 4 | adipose tissue
5 | adipose

Figure 1: Individual annotations, their centroids and extended centroids.
CLEF-ER Challenge for Semantically Annotating Multilingual Corpora

In order to enrich the non-English part of our MTR with new synonyms and/or new translations, we followed a collaborative, corpus-based approach, the so-called CLEF-ER challenge. The objective of the challenge was the identification of mentions of named entities and biomedical concepts in multilingual biomedical corpora, including the attribution of CUIs from our MTR to these mentions.
Input Resources for the Challenge
The participants of the CLEF-ER challenge received the following input data from the organizers. First, the MTR in the OBO exchange format. Second, the unannotated non-English parallel corpora. Third, the automatically annotated and harmonized English Silver Standard Corpus (SSC). The creation of the English SSC for CLEF-ER and its properties are described in detail by Lewin and Clematide (2013). There are several reasons why an English SSC is useful for the enhancement of multilingual terminological resources. First, expert annotations for a broad-coverage gold standard annotation are costly and time-consuming and do not scale up to large corpora. Second, the coverage of English terminology resources and the performance of biomedical named entity taggers for English allow for an automatic annotation in a quality that alleviates the need of a gold standard. Third, an even more satisfactory level of automatic named entity annotation can be reached if the output of several systems is harmonized into an ensemble annotation, the so-called harmonized SSC. The harmonization avoids the inevitable biases and errors of any individual annotation solution. For the alignment and harmonization of the output of several different entity taggers, we applied and adapted the centroid approach originally described in (Lewin et al., 2012). Figure 1 illustrates the character-based centroid harmonization. Each annotation adds one vote to the inter-entity pairs of adjacent characters (spaces are ignored). If a pre-determined voting threshold is reached, the span with
the highest number of votes is considered the centroid. The boundary distribution of a centroid is given by the character offsets to the left and right of the centroid where the number of votes changes. The value of a boundary is the difference in number of votes. Although centroids and their boundary distributions are maximally informative, they could have been too complex and discouraging for the challenge participants. Therefore, we decided to transform the centroids into a classical markup format with single boundaries. In general, the boundaries of centroids cannot be taken as adequate mention boundaries for the enhancement of a terminology, because they represent only the shared core of an ensemble annotation. In order to include more lexical content, we decided to extend the centroids (e-centroids) to the left and right according to a pre-determined boundary threshold.
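The following sketch is ours, not the project's code: it reconstructs the voting-and-extension idea from the worked example in Figure 1. It simplifies the character handling by also counting votes on spaces inside a span, and all function and parameter names are illustrative.

```python
# Minimal sketch of character-vote harmonization and e-centroid extension.
def harmonize(text, annotation_runs, voting_threshold, boundary_threshold):
    votes = [0] * len(text)
    for spans in annotation_runs:          # one list of (start, end) spans per system
        for start, end in spans:
            for i in range(start, end):
                votes[i] += 1

    peak = max(votes)
    if peak < voting_threshold:
        return None                        # the ensemble does not agree strongly enough

    # Centroid: the contiguous region carrying the maximum number of votes.
    c_start = votes.index(peak)
    c_end = c_start
    while c_end < len(text) and votes[c_end] == peak:
        c_end += 1

    # E-centroid: extend while the neighbouring votes stay at or above the boundary
    # threshold (in the Figure 1 example, threshold 5 keeps "adipose", 3 or 4 add
    # "tissue", 1 or 2 add "visceral").
    s, e = c_start, c_end
    while s > 0 and votes[s - 1] >= boundary_threshold:
        s -= 1
    while e < len(text) and votes[e] >= boundary_threshold:
        e += 1
    return text[s:e].strip()
```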
For the English SSC, 6 different annotations were available from the MANTRA project partners. A voting threshold of 3 and a boundary threshold of 2 were finally chosen. This setting kept 45% of all possible concept centroids (voting threshold 1). On average, 19% (standard deviation 14%) of the original annotations were removed. 97.8% of the partner annotations that went into the SSC had exactly the same boundaries as their e-centroids.

Table 3: Distribution of the challenge contributions (A1-7) for the non-English SSCs. Some contributors provided more than one annotation run for a corpus but only one run was selected for the SSC in order to prevent harmonization biases.
Exploiting the Challenge Outcome
Each challenge participant had to deliver at least one annotated non-English corpus. In total, seven annotation solutions were submitted to the challenge (Rebholz-Schuhmann et al., 2013b). Almost all contributing solutions exploited publicly available resources (UMLS, WordNet, Wikipedia) and, in addition, applied lexical lookup solutions or indexing of the terminological resources. Two groups translated the terms through public resources (i.e. BabelNet, Google Translate), and four systems made use of statistical machine translation methods or multilingual word alignment. Altogether, the used solutions showed high heterogeneity. Table 3 shows the distribution of annotation contributions across languages and corpora. Unfortunately, only two systems annotated Dutch corpora, which is the reason that we excluded this language from the terminology enhancement evaluations described below. The challenge contributions were evaluated in two different ways. Evaluation A measured the annotations of an individual contribution against a non-English SSC built from all contributions on the level of mentions. Evaluation B compared the bag of CUIs in one unit against the bag of CUIs annotated in the unit of the parallel English SSC. The exploitation of the challenge outcomes for the enhancement of the non-English terminology relies on non-English SSCs that were harmonized from the challenge contributions. However, for the purpose of terminology enhancement we are more interested in the subset of annotations that cannot be trivially linked to already existing entries in our provided MTR. Therefore, we produced a partially deannotated version of the challenge contributions where we removed such annotations. This material was then used to create deannotated SSCs according to two different voting threshold schemas. The majority voting schema requires a threshold of V := N/2 + N mod 2 (integer division, i.e. N/2 rounded up), where N is the number of contributions for a given corpus. The fixed threshold voting schema requires a minimal number of votes; for our non-English corpora, a voting threshold of 2 was set.
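As a small illustration (the function names are ours), the two threshold schemas can be written as:

```python
def majority_threshold(n_contributions):
    # V := N/2 + N mod 2 with integer division, i.e. N/2 rounded up
    return n_contributions // 2 + n_contributions % 2

def fixed_threshold(minimum_votes=2):
    # the fixed schema simply requires a minimal number of votes
    return minimum_votes

# e.g. majority_threshold(7) == 4, majority_threshold(6) == 3
```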
Results
We performed both quantitative and selected analyses of the annotations in the SSC, investigating effects of harmonization methods, corpus types, corpus sizes and languages (focusing on German, French and Spanish).
Number of Annotations
We counted for each class the number of concepts (i.e. CUIs), terms and term occurrences, and calculated the ratios thereof, as well as counts normalized for corpus size. Findings have been normalized by removing diacritics and non-letter characters, and transforming them to a lowercase representation. Tables 5 and 6 in the appendix provide an overview on the number of annotations contained in the SSCs generated with threshold voting and majority voting, respectively. For our analysis we distinguish three annotation classes:
• known, i.e. the UMLS contains the annotated text as a term for the concept and language in question.
• entirely new, i.e. the UMLS does not contain the annotated text as a term for the concept, neither for English, nor for the language in question.
• new, as English, i.e. the UMLS does not contain the annotated text as a term for the concept and language in question, yet contains it for English-many of these terms are of Latin origin, e.g. the name of the fungus Cephalosporium acremonium.
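A minimal sketch of this three-way classification follows; the data structure and names are our assumptions, not the project's. Here `terms` is assumed to map a (language, CUI) pair to the set of normalized synonyms in the UMLS-derived MTR.

```python
import unicodedata

def normalize(term):
    # remove diacritics and non-letter characters, lowercase (as described above)
    decomposed = unicodedata.normalize("NFKD", term)
    return "".join(c for c in decomposed if c.isalpha() or c.isspace()).lower()

def classify(annotated_text, cui, language, terms):
    surface = normalize(annotated_text)
    if surface in terms.get((language, cui), set()):
        return "known"
    if surface in terms.get(("en", cui), set()):
        return "new, as English"
    return "entirely new"
```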
Comparing harmonization methods: The largest portion of annotations by all metrics results from the new class if using threshold harmonization, yet for majority harmonization the known class dominates, except for Spanish concepts and terms. The overall numbers for the known class are comparable for both harmonization methods, threshold harmonization producing slightly higher numbers (about 10 percent). In contrast, numbers for the classes as English and new are far lower for the more conservative majority voting. This difference is especially dramatic for the new class, with the majority harmonized SSC containing only about half as many concepts, a sixth of the terms and a quarter of the occurrences present in the threshold harmonized SSC. This is also reflected in the ratio of terms/concept, being very similar for the known (about 1.3) and as English classes (about 1.1) over all languages and corpora. In contrast, results for the new class depend strongly on the harmonization method used-majority voting results in numbers around 1.2, whereas threshold voting results in ratios of 3.5 to 4.2. The ratio of occurrences/concept for the new class is also diverging based on the harmonization method, majority harmonization resulting in about half the value provided by threshold harmonization. In general, majority harmonization seems to result in new annotations behaving similar to those of the known or as English class, while threshold harmonization new annotations behave atypically, having both far more terms and occurrences per concept.
Comparing corpora: The German EMEA corpus and the French PATENT corpus provided surprisingly few new concepts and terms relative to their number of new occurrences, independently of the harmonization method being used; the inverse is true for Spanish MEDLINE (cf. the occurrences/concept column of Tables 5 and 6). MEDLINE is the dominant source of annotations in all three classes, probably due to its high corpus size, broad thematic spectrum and the annotation-friendly simple syntactic structure of the titles.
Regarding languages: Spanish has, independently of the harmonization method being used, about three times the known and twice the new concepts and terms per thousand words as other languages, thus its absolute number of annotated concepts and terms is comparable to those of German and French, despite its combined corpora having only 5M words, whereas German and French have 13M and 15M, respectively. The absolute number of known concepts and terms is similar for French and Spanish, while German is about 10 percent lower, again independently of the harmonization method used. As English terms and concepts are more frequent in German, especially for majority harmonization or MEDLINE titles, which could be caused by a greater openness to English loan words or more reliance on Greek and Latin medical terms.
Overall, the analysis of the SSC annotations leads to two questions: Why are there so many more as English synonyms in German than in other languages and is threshold voting too lax or is the abnormal number of terms and occurrences in the new class an accurate reflection of the corpora? While the latter question can only be answered by the creation of and evaluation against a GSC, the former can be answered by sampling the annotations of the as English class.
Breakdown of as English annotations
To better understand the occurrence of as English annotations in non-English texts and the comparatively high number of as English terms and concepts in German texts we sampled 100 randomly selected terms each for German, Spanish and French from the threshold harmonized SSC.
We suspected internationally used Latin and Greek loanwords to be the main reason for the appearance of as English terms in general and a greater openness to English loanwords as the reason for the abnormally high rate in German corpora. We found the following explanations for as English terms occurring in non-English texts:
• Latin or Greek terms used internationally, e.g. "decubitus"; used only for terms which are inflected according to the original language and not for compounds or words formed by derivation with non-Latin/Greek material.
• Names of drugs, chemical compounds, persons or places, e.g. "Valoron".
• English words used internationally, e.g. "suspension".
• Abbreviation used internationally, e.g. "PCP" for pneumonia.
• Other, e.g. random similarity like "perimeters" which could be both an English plural or a German genitive of the Greek loanword.
German behaved according to our expectation, with Latin/Greek words making up the majority of as English terms, whereas those made up only a minority of the as English terms for French and Spanish (cf. Table 4). French as English terms are quite often French terms which are missing in the terminology, yet appear, due to diacritics being removed during normalization, to be English terms. Spanish as English terms are most often real English terms. Some of these cases are due to wrong language identification in the EMEA corpus, e.g. the following sentence being listed as Spanish: "Dogs Treatment of pain". Overall no clear explanation for the differences in the frequency of as English terms could be found, and surprisingly German seems to be much more open to Latin and Greek terms than the two Romance languages. A possible explanation are differences in the existing terminologies, e.g. the Spanish terminology already containing many Latin/Greek terms leading to few new ones being found, yet further investigating the etymology of UMLS entries is out of scope for this paper.
Conclusions
The exploitation of parallel SSCs for the generation of multilingual terminological resources is a new approach which enables normalization of the term candidates against an existing terminological resource. Future work will include the creation of a small GSC, allowing us to assess the quality of the SSC, and to refine our harmonization process to find a good balance between the number and quality of new terms. We also plan to use the multilingual annotations to enrich the underlying terminological resource with new non-English entries and assess the impact of an enhanced terminology on other applications, e.g. machine translation. The SSCs described in this paper will be made publicly available in Summer 2014 via ELRA.
Acknowledgements
Table 1: Unit and word counts per language in all corpora. The MEDLINE titles are strictly bilingual, German/English and French/English titles are more frequent than Spanish/English. The multilingual EMEA corpus covers all languages. The patent claims are multilingual, however, they do not cover Spanish and Dutch. Patent units are whole paragraphs from the patent claims, all other units are segments of the size of sentences.
Table 2 shows a detailed breakdown of the multilingual coverage of the MTR. Some of the resources already have a very high coverage in one or more non-English languages. For instance, SNOMED-CT in Spanish, or MEDDRA in German, French and Spanish. However, for MESH all non-English languages are strongly under-resourced.

Terms | MESH | SNOMED-CT | MEDDRA
en | 764,000 | 1,184,005 | 56,061
de | 77,249 | - | 50,128
fr | 105,758 | - | 49,586
es | 59,678 | 1,089,723 | 49,499
nl | 40,808 | - | -

Table 2: Multilingual terminological resource: The English part of the TR contains most terms. Only Spanish is covered in SNOMED-CT. MEDDRA terms have been translated in all languages.
Table 4: This table lists the frequency of explanations for the occurrence of as English terms in German, French and Spanish texts, based on a sample of 100 terms each. We distinguish the following explanations: Latin/Greek term used internationally, name (of e.g. a drug), English word used internationally, abbreviation used internationally and other (e.g. random similarity).
Table 5: This table gives an overview on the identification of terms in the non-English SSCs harmonized by threshold voting, distinguishing three classes of annotations: known, i.e. contained in the UMLS as a term for this language; new, i.e. not contained in the UMLS as a term, neither for this language nor for English; and as English, i.e. not contained in the UMLS as a term for this language, yet for English (mostly Latin terms). We list for each combination of class, corpus and language the number of concepts, terms and occurrences annotated in the SSC, both in absolute numbers and normalized per thousand words. We also list the ratios of terms and occurrences per concept.

Corpus | Language | Class | Concepts | Conc./1k words | Terms | Terms/1k words | Occurrences | Occ./1k words | Terms/Conc. | Occ./Conc.
EMEA | de | known | 4,513 | 2.15 | 5,408 | 2.58 | 123,076 | 58.61 | 1.20 | 27.27
EMEA | de | new | 1,907 | 0.91 | 2,059 | 0.98 | 87,975 | 41.89 | 1.08 | 46.13
EMEA | de | as English | 1,130 | 0.54 | 1,165 | 0.55 | 46,344 | 22.07 | 1.03 | 41.01
EMEA | es | known | 6,307 | 2.52 | 7,800 | 3.12 | 138,243 | 55.21 | 1.24 | 21.92
EMEA | es | new | 3,887 | 1.55 | 4,680 | 1.87 | 174,920 | 69.86 | 1.20 | 45.00
EMEA | es | as English | 1,022 | 0.41 | 1,066 | 0.43 | 34,455 | 13.76 | 1.04 | 33.71
EMEA | fr | known | 5,275 | 2.03 | 6,673 | 2.57 | 147,460 | 56.76 | 1.27 | 27.95
EMEA | fr | new | 5,409 | 2.08 | 6,448 | 2.48 | 184,789 | 71.13 | 1.19 | 34.16
EMEA | fr | as English | 1,397 | 0.54 | 1,465 | 0.56 | 57,459 | 22.12 | 1.05 | 41.13
MEDLINE | de | known | 15,874 | 2.65 | 20,066 | 3.35 | 448,442 | 74.78 | 1.26 | 28.25
MEDLINE | de | new | 24,585 | 4.10 | 29,988 | 5.00 | 318,494 | 53.11 | 1.22 | 12.95
MEDLINE | de | as English | 8,956 | 1.49 | 9,607 | 1.60 | 54,286 | 9.05 | 1.07 | 6.06
MEDLINE | es | known | 17,464 | 6.79 | 22,045 | 8.57 | 276,586 | 107.50 | 1.26 | 15.84
MEDLINE | es | new | 14,973 | 5.82 | 17,329 | 6.73 | 105,516 | 41.01 | 1.16 | 7.05
MEDLINE | es | as English | 1,919 | 0.75 | 2,003 | 0.78 | 12,839 | 4.99 | 1.04 | 6.69
MEDLINE | fr | known | 17,121 | 2.84 | 22,984 | 3.82 | 489,760 | 81.30 | 1.34 | 28.61
MEDLINE | fr | new | 22,580 | 3.75 | 25,825 | 4.29 | 311,403 | 51.69 | 1.14 | 13.79
MEDLINE | fr | as English | 3,776 | 0.63 | 3,924 | 0.65 | 53,876 | 8.94 | 1.04 | 14.27
Patent | de | known | 4,092 | 0.79 | 4,560 | 0.88 | 75,185 | 14.48 | 1.11 | 18.37
Patent | de | new | 1,739 | 0.33 | 1,849 | 0.36 | 62,649 | 12.06 | 1.06 | 36.03
Patent | de | as English | 457 | 0.09 | 464 | 0.09 | 7,284 | 1.40 | 1.02 | 15.94
Patent | fr | known | 4,992 | 0.75 | 5,918 | 0.88 | 217,885 | 32.57 | 1.19 | 43.65
Patent | fr | new | 2,017 | 0.30 | 2,266 | 0.34 | 204,705 | 30.60 | 1.12 | 101.49
Patent | fr | as English | 702 | 0.10 | 750 | 0.11 | 78,634 | 11.75 | 1.07 | 112.01
all | de | known | 17,102 | 1.29 | 21,851 | 1.64 | 646,703 | 48.66 | 1.28 | 37.81
all | de | new | 25,436 | 1.91 | 31,050 | 2.34 | 469,118 | 35.30 | 1.22 | 18.44
all | de | as English | 9,590 | 0.72 | 10,293 | 0.77 | 107,914 | 8.12 | 1.07 | 11.25
all | es | known | 19,260 | 3.79 | 24,804 | 4.89 | 414,829 | 81.71 | 1.29 | 21.54
all | es | new | 17,558 | 3.46 | 20,855 | 4.11 | 280,436 | 55.24 | 1.19 | 15.97
all | es | as English | 2,734 | 0.54 | 2,872 | 0.57 | 47,294 | 9.32 | 1.05 | 17.30
all | fr | known | 18,933 | 1.24 | 26,019 | 1.70 | 855,105 | 55.85 | 1.37 | 45.16
all | fr | new | 26,034 | 1.70 | 30,527 | 1.99 | 700,897 | 45.77 | 1.17 | 26.92
all | fr | as English | 4,600 | 0.30 | 4,835 | 0.32 | 189,969 | 12.41 | 1.05 | 41.30
Table 6: This table gives an overview on the identification of terms in the non-English SSCs harmonized by majority voting, distinguishing three classes of annotations: known, i.e. contained in the UMLS as a term for this language; new, i.e. not contained in the UMLS as a term, neither for this language nor for English; and as English, i.e. not contained in the UMLS as a term for this language, yet for English (mostly Latin terms). We list for each combination of class, corpus and language the number of concepts, terms and occurrences annotated in the SSC, both in absolute numbers and normalized per thousand words. We also list the ratios of terms and occurrences per concept.
http://www.mantra-project.eu/
http://www.nlm.nih.gov/research/umls/
The MTR is accessible (UMLS licence restrictions apply) through the submission site of the CLEF-ER challenge: http://www.clefer.org.
http://www.geneontology.org/GO.format.obo-1_2.shtml
Bodenreider, O. (2004). The Unified Medical Language System (UMLS): integrating biomedical terminology. Nucleic Acids Research, 32:D267-270.
Brown, E. G., Wood, L., and Wood, S. (1999). The medical dictionary for regulatory activities (MedDRA). Drug Safety, 20(2):109-117.
Camon, E., Magrane, M., Barrell, D., Lee, V., Dimmer, E., Maslen, J., Binns, D., Harte, N., Lopez, R., and Apweiler, R. (2004). The Gene Ontology Annotation (GOA) Database: sharing knowledge in Uniprot with Gene Ontology. Nucleic Acids Research, 32(Database issue):D262-D266.
Campbell, J. R., Carpenter, P., Sneiderman, C. A., Cohn, S., Chute, C. G., Warren, J., and CPRI Work Group on Codes and Structures (1997). Phase II evaluation of clinical coding schemes: Completeness, taxonomy, mapping, definitions, and clarity. Journal of the American Medical Informatics Association, 4(3):238-250.
Lewin, I. and Clematide, S. (2013). Deriving an English biomedical silver standard corpus for CLEF-ER. In CLEF 2013 Evaluation Labs and Workshop, Online Working Notes, 23-26 September, Valencia, Spain.
Lewin, I., Kafkas, S., and Rebholz-Schuhmann, D. (2012). Centroids: Gold standards with distributional variations. In Proceedings of the 2012 Language Resources and Evaluation Conference (LREC '12), Istanbul, Turkey.
Rebholz-Schuhmann, D., Yepes, A. J., van Mulligen, E. M., Kang, N., Kors, J., Milward, D., Corbett, P., Buyko, E., Beisswanger, E., and Hahn, U. (2010). CALBC silver standard corpus. Journal of Bioinformatics and Computational Biology, 8(1):163-179.
Rebholz-Schuhmann, D., Jimeno-Yepes, A., Li, C., Kafkas, S., Lewin, I., Kang, N., Corbett, P., Milward, D., Buyko, E., Beisswanger, E., Hornbostel, K., Kouznetsov, A., Witte, R., Laurila, J. B., Baker, C. J. O., Kuo, C.-J., Clematide, S., Rinaldi, F., Farkas, R., Mra, G., Hara, K., Furlong, L., Rautschka, M., Lara Neves, M., Pascual-Montano, A., Wei, Q., Collier, N., Mahbub Chowdhury, Md. F., Lavelli, A., Berlanga, R., Morante, R., Van Asch, V., Daelemans, W., Marina, J. L., van Mulligen, E., Kors, J., and Hahn, U. (2011). Assessment of NER solutions against the first and second CALBC Silver Standard Corpus. Journal of Biomedical Semantics, 2(Suppl 5):S11.
Rebholz-Schuhmann, D., Clematide, S., Rinaldi, F., Kafkas, S., van Mulligen, E. M., Bui, C., Hellrich, J., Lewin, I., Milward, D., Poprat, M., Jimeno-Yepes, A., Hahn, U., and Kors, J. A. (2013a). Multilingual semantic resources and parallel corpora in the biomedical domain: the CLEF-ER Challenge. In CLEF 2013 Evaluation Labs and Workshop, Online Working Notes, 23-26 September, Valencia, Spain.
Rebholz-Schuhmann, D., Clematide, S., Rinaldi, F., Kafkas, S., van Mulligen, E. M., Bui, C., Hellrich, J., Lewin, I., Milward, D., Poprat, M., Jimeno-Yepes, A., Hahn, U., and Kors, J. A. (2013b). Entity recognition in parallel multi-lingual biomedical corpora: the CLEF-ER laboratory overview. In Forner, P., Müller, H., Paredes, R., Rosso, P., and Stein, B., editors, Information Access Evaluation. Multilinguality, Multimodality, and Visualization - 4th International Conference of the CLEF Initiative, CLEF 2013, Valencia, Spain, September 23-26, 2013, Proceedings, Lecture Notes in Computer Science, pages 353-367. Springer.
Spackman, K. A. and Campbell, K. E. (1998). Compositional concept representation using SNOMED: towards further convergence of clinical terminologies. In Chute, C. G., editor, AMIA '98 - Proceedings of the 1998 AMIA Annual Fall Symposium, Orlando, FL, November 7-11, 1998, pages 740-744. Hanley & Belfus, Philadelphia, PA.
Stearns, M. Q., Price, C., Spackman, K. A., and Wang, A. Y. (2001). SNOMED Clinical Terms: overview of the development process and project status. In Proceedings of the AMIA Symposium, page 662. American Medical Informatics Association.
Tiedemann, J. (2009). News from OPUS: A collection of multilingual parallel corpora with tools and interfaces. In Recent Advances in Natural Language Processing, volume V, pages 237-248. Amsterdam/Philadelphia: John Benjamins. |
7,353,201 | Discovering Correction Rules for Auto Editing | This paper describes a framework that extracts effective correction rules from a sentence-aligned corpus and shows a practical application: auto-editing using the discovered rules. The framework exploits the methodology of finding the Levenshtein distance between sentences to identify the key parts of the rules and uses the editing corpus to filter, condense, and refine the rules. We have produced the rule candidates of such form, A B, where A stands for the erroneous pattern and B for the correct pattern.The developed framework is language independent; therefore, it can be applied to other languages. The evaluation of the discovered rules reveals that 67.2% of the top 1500 ranked rules are annotated as correct or mostly correct by experts. Based on the rules, we have developed an online auto-editing system for demonstration at http://ppt.cc/02yY. | [
757808,
16463625,
8494338
] | Discovering Correction Rules for Auto Editing
An-Ta Huang
Tsung-Ting Kuo
Ying-Chun Lai
Shou-De Lin
Discovering Correction Rules for Auto Editing
Computational Linguistics and Chinese Language Processing, Vol. 15, No. 3-4, September/December 2010, pp. 219-236
Keywords: Edit Distance, Erroneous Pattern, Correction Rules, Auto Editing
This paper describes a framework that extracts effective correction rules from a sentence-aligned corpus and shows a practical application: auto-editing using the discovered rules. The framework exploits the methodology of finding the Levenshtein distance between sentences to identify the key parts of the rules and uses the editing corpus to filter, condense, and refine the rules. We have produced the rule candidates of such form, A B, where A stands for the erroneous pattern and B for the correct pattern.The developed framework is language independent; therefore, it can be applied to other languages. The evaluation of the discovered rules reveals that 67.2% of the top 1500 ranked rules are annotated as correct or mostly correct by experts. Based on the rules, we have developed an online auto-editing system for demonstration at http://ppt.cc/02yY.
Introduction
Nowadays, people write blogs, diaries, and reports not only in their native language but sometimes in a language they are not that familiar with. During the process of writing, second/foreign language learners might make some errors, such as in spelling, grammar, and lexical usage. Therefore, how to provide editorial assistance automatically and effectively has become an important and practical research issue for NLP (Natural Language Processing) researchers. For second/foreign language learners, providing instant responses to their writing, indicating which part might be incorrect, and offering auto-editing suggestions for them to choose from would be beneficial for the improvement of their writing and other aspects of language development.
Editing plays an important part in language learning. It can be classified into human editing and machine editing. Human editing has some limitations. Human editing is inefficient when the size of the edited articles becomes large, and it is inconvenient sometimes for people who need this service for their daily documents, like diaries, letters, and emails. Besides, human editing involves subjective opinions, which are different from the machine editing strategy that relies mostly on the objective empirical outcomes.
Despite the growing demand of editorial assistance tools, the existing ones still have considerable room for improvement. For example, the grammar checker provided by Microsoft Word has known deficiencies of being language dependent and covering only a small portion of errors without explicitly revealing the correction mechanism.
Given the importance of the need to develop editing tools, a new editing system is proposed. The current research demonstrates an auto-editing system based on the correction rules mined from online editing websites. In this paper, we focus on two research goals. First, we aim to design a strategy that identifies effective rules automatically and efficiently from editing databases. Second, we aim to design an auto-editing system based on the discovered rules.
Our method is language independent; therefore, it can be applied easily to other languages. Our evaluation reveals that, among the top 1500 rules the system found, 67.2% of them are regarded as correct or mostly correct.
The remainder of the paper is organized as follows. Section 2 describes the related work on detecting erroneous patterns. Section 3 lays out our methodology. Section 4 describes the experiment and our demo system. Section 5 concludes our study.
Related Works
Previous approaches can be classified into two categories. The first category detects erroneous patterns based on rules, and the second category makes use of statistical techniques for such a purpose.
Knowledge-Based Method
Some methods that detect erroneous patterns based on manually created rules have proven effective in detecting grammar errors (Heidorn, 2000). Michaud, McCoy, & Pennington (2000) developed a system, including an error identification model and a response generation model, using knowledge bases that cover general information about analyzing grammar structure and specific information about a user's learning history. Also, Dan, Flickinger, Oepen, Walsh, & Baldwin (2004) presented a tutorial system based on computational grammar augmented with mal-rules for analysis, error diagnosis, and semantics-centered generation of correct forms. Nevertheless, manually designed rules generally consume labor and time and require language experts, which limits the generalization capability of such methods. Furthermore, manually designed rules can hardly be applied to different languages.
Statistical Techniques
As discussed in Section 2.1, rule-based methods have some apparent shortcomings. Rather than asking experts to annotate a corpus, some researchers have proposed statistical models to identify erroneous patterns. An unsupervised method to detect grammatical errors by inferring negative evidence reached 80% precision and 20% recall (Chodorow & Leacock, 2000). It is reported that this system is only effective in recognizing certain grammatical errors and detects only about one-fifth as many errors as a human judge does. Some other papers focus on detecting particular errors, such as preposition errors (Hermet & Desilets, 2009) or disagreement on the quantifier and misuse of the noun (Brockett, Dolan, & Gamon, 2006). Another line of work treats the detection of erroneous sentences as a binary classification problem and proposes a new feature called "Labeled Sequential Patterns" (LSP) for this purpose. This feature is compared to four other features, including two scores produced by a toolkit, lexical collocation (Yajuan & Ming, 2004), and function word density. The results show that the average accuracy of LSP (79.63%) outperforms the other four features. Furthermore, the existence of time words and function words in a sentence is proven to be important. In this way, one can only know whether a sentence is correct or not and would not have a clue about how to correct errors. Finally, some researchers have modeled the detection of erroneous patterns as a statistical machine translation problem, treating the erroneous sentences and the correct sentences as two different languages. Nevertheless, error correction could be intrinsically different from translation, and there is no apparent evidence whether the existing machine translation techniques are suitable for such a purpose (Guihua, Gao, Xiaohua, Chin-Yew, & Ming, 2007; Shi & Zhou, 2005).
Our work is different from the previous ones in two major respects. First, we treat error detection as a pattern mining problem to extract effective rules from an editing corpus. Second, we focus on designing a language-independent system; in this paper we therefore avoid using language-specific features such as contextual, syntactic, or grammatical information.
Methodology
Overview
Figure 1 shows that our framework consists of two parts. It produces some raw rules in the first stage and tries to refine them in the next stage.

Figure 1. System Overview
Corpus Description
We retrieved 310967 parallel pairs of sentences (i.e. each pair consists of one erroneous sentence and one correct sentence) from an online-editing website Lang-8 (http://lang-8.com/). The website allows people to write diaries in their second/foreign language and the diaries (which usually contain some mistakes) would be edited by some volunteer members who are native speakers of the corresponding language. The edited part in an article is restricted to a single sentence (not cross-sentential). Consequently, we could retrieve the sentence-aligned data through crawling the website.
In the following sections, we use "W i " to represent the erroneous sentence of the i-th pair of sentences in the corpus and "C i " to represent the corresponding correct sentence. S + is defined as the collection of all correct sentences in the corpus, while S - is defined as the collection of all erroneous sentences.
Producing Rules
The following are some definitions of erroneous and correct patterns, rules, applying rules, and frequency of patterns:
Definition: (erroneous and correct) patterns: A pattern is a series of consecutive words (or characters) that belong to a subsequence of a sentence. An erroneous pattern represents such a sequence that is believed to be wrong, and a correct pattern is one that is believed to be correct.
Definition: a rule: A rule K can be written as K L => K R . The left-hand side of the arrow, K L , is an erroneous pattern and the right-hand side of the arrow, K R , is the correct pattern which K L should be transformed to.
Definition: applying a rule to a sentence: Given a rule K : K L => K R and a sentence T, if K L exists in T, we replace every occurrence of K L in T with K R . Such a process is considered as "applying rule K to a sentence T."

Definition: fre S+ (K L ): the occurrence frequency of a pattern K L in corpus S + .

To discover a rule A => B from the editing corpus, we first had to identify the plausible left-hand and right-hand sides of the rule. This is by no means a trivial task, and the fact that there could be various choices of such a rule made the task even more difficult. One intuitive method was to compare the word sets of W i and C i and create the patterns from the difference between them. Nevertheless, such an intuitive method suffers certain deficiencies, as the following example shows.
Erroneous: "I with him had dinner." Correct: "I had dinner with him."
The difference set is an empty set since the order is not considered. It is not clear how this difference set can lead to both erroneous and correct patterns. The approach we proposed was to exploit the procedure of calculating the word-level Levenshtein distance, which is often called editing distance (Levenshtein, 1966). The Levenshtein distance is defined as the minimum number of edits needed to transform one string into the other, with the allowable edit operations being insertion, deletion, or substitution of a single character (Levenshtein, n.d.). Similarly, the edit distance between two sentences can be defined as the minimum number of allowable operations required to transform from one of them into the other, given each unit of transformation being based on words rather than characters.
The insert operation inserts a word X into the erroneous sentence, which implies there is a word X that has the potential to be involved in the correct pattern K R for a rule K L K R . Similarly, the delete operation removes one word Y from the erroneous sentence to become the correct one, and this word Y is likely to be involved in the erroneous pattern K L . Finally, when a substitute operation is performed, the word to be replaced should appear in K L while the replacing word shall be involved in K R . Here, we argue that the words run through the editing-distance process from an erroneous to a correct sentence have a higher chance to be involved in the patterns of rules. For example, if we apply an editing distance approach to the following sentence pairs, multiple outputs can be acquired, such as the ones shown in Table 1 and Table 2. Levenshtein distance could calculate the difference between sentences, and we believe that rules are based on the differences.
Erroneous: "I still don't know where is it in the movie."
Correct: "I still don't understand where it is in the movie."
Based on the two editing-distance results shown in Tables 1 and 2, we can conclude that the four words {it, is, know, understand} are plausible words to appear in the rule K L => K R . For each pair of W i and C i , we collect all of the involved words after computing the Levenshtein distance. Figure 2 shows the pseudo code. We exploited a dynamic programming approach to improve its efficiency.
Figure 2. Pseudo code of producing rules
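The pseudo code of Figure 2 is not reproduced in this copy of the paper; the following is a minimal re-implementation sketch (ours, not the authors' original) of the idea: compute a word-level Levenshtein alignment and collect the words touched by insert, delete, and substitute operations.

```python
def involved_words(wrong, correct):
    w, c = wrong.split(), correct.split()
    n, m = len(w), len(c)
    # dp[i][j] = word-level edit distance between w[:i] and c[:j]
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(n + 1):
        dp[i][0] = i
    for j in range(m + 1):
        dp[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 0 if w[i - 1] == c[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # delete w[i-1]
                           dp[i][j - 1] + 1,         # insert c[j-1]
                           dp[i - 1][j - 1] + cost)  # substitute / match
    # backtrace one optimal alignment, collecting the touched words
    words, i, j = set(), n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] and w[i - 1] == c[j - 1]:
            i, j = i - 1, j - 1                      # match, nothing collected
        elif i > 0 and j > 0 and dp[i][j] == dp[i - 1][j - 1] + 1:
            words.update([w[i - 1], c[j - 1]])       # substitution
            i, j = i - 1, j - 1
        elif i > 0 and dp[i][j] == dp[i - 1][j] + 1:
            words.add(w[i - 1])                      # deletion
            i -= 1
        else:
            words.add(c[j - 1])                      # insertion
            j -= 1
    return words
```

On the example pair above this yields {'know', 'understand', 'is', 'it'}, modulo which of the optimal alignments the backtrace happens to follow.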
After applying the modified Levenshtein distance algorithm, it is possible to obtain a set of involved words R i , as shown below.

R i = {is, it, understand, know}

To form a reasonable pattern, however, the words in set R i are not sufficient. They should be combined with other terms. Ideally, K L and K R must consist of some words from R i and some from the rest of the sentence. Therefore, for each pair of W i and C i in the corpus, we retrieved consecutive word patterns in which at least one word was from R i . Based on R i , the following examples are rule candidates.
"it is", "it is in", "is in the", "I still don't understand", "still don't understand where", "don't understand where it", "understand where it is", "where it is in", "it is in the", "is in the movie"
Next, we matched each plausible candidate for K L to each candidate for K R to form a plausible rule (Table 3). For each plausible rule, we then checked its feasibility by applying it to W i to see if the correct sentence C i could be produced. The infeasible rules would be ignored.
Definition of feasible rule: Given a rule K : K L => K R . In a corpus, if at least one erroneous sentence in the corpus can be corrected using K, then K is considered a feasible rule.
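A compact sketch of candidate generation and the feasibility test defined above follows; it is ours, and the maximum pattern length is an assumption (the paper does not state one). Here `involved` is the word set R i from the edit-distance step.

```python
def candidate_patterns(sentence, involved, max_len=5):
    words = sentence.split()
    patterns = set()
    for length in range(1, max_len + 1):
        for start in range(len(words) - length + 1):
            chunk = words[start:start + length]
            if any(w in involved for w in chunk):   # at least one involved word
                patterns.add(" ".join(chunk))
    return patterns

def feasible(rule_left, rule_right, wrong, correct):
    # keep the rule only if applying it to the erroneous sentence
    # actually produces the corrected sentence
    return rule_left in wrong and wrong.replace(rule_left, rule_right) == correct
```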
Refining Rules
So far, we have generated several rules, some of which make sense and some of which might not. In this section, we describe how to assess the quality of the rules and how to refine them. We believe the erroneous patterns K L should not occur in the correct sentences too frequently (otherwise it would have been replaced by the correct one K R ); therefore, we considered fre S+ as a suitable metric to evaluate the quality of a rule. According to the real experiment shown in Table 4, the frequency of the erroneous patterns seems to be lower in the correct corpus, fre S+ , compared to the correct ones.
Next, we condensed the rules according to their fre S+ . The condensed rule is shorter than the original one and is supposed to be more general (i.e. can cover more sentences). For example, in the following sentences, the condensed rule is more general and reasonable since the subject 'I' has nothing to do with the erroneous pattern.
Erroneous: "I went to shopping and had dinner with my friend yesterday."
Correct: "I went shopping and had dinner with my friend yesterday."
Rule: "I went to shopping." => "I went shopping." Condensed Rule: "went to shopping" => "went shopping"
To obtain the shortest possible rules for auto-editing, we proposed a simple idea to check if the left hand side K L could be condensed to a shorter one, without boosting its fre S+ significantly. If yes, then it implied we had found a shorter erroneous pattern that also occurred rarely in the correct corpus. For example, for the erroneous pattern "I am surprised at." Table 5 shows the frequency of each possible subsequence in the correct corpus. Apparently "am surprised at" is the most condensed rule that does not occur more than ten times in the correct corpus. What follows here is the algorithm for rule condensing. If the frequency of the condensed erroneous rule is smaller than an empirically-defined threshold frequency N condense , we will accept it as a condensed erroneous pattern. Then, we remove the same words from the K R to produce the corresponding correct pattern. The condensing process repeats until any of the words to be removed in K L do not occur in the K R . The pseudo code of condensing rules is shown in Figure 3.
Figure 3. Pseudo code of condensing rules
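The pseudo code of Figure 3 is likewise missing from this copy; a rough sketch (ours) of the condensing procedure as described above could look like the following, where `freq_correct` stands for fre S+ and is assumed to be supplied by the caller. For simplicity it only drops a word shared by both patterns at the same end.

```python
def condense(rule_left, rule_right, freq_correct, n_condense=10):
    left, right = rule_left.split(), rule_right.split()
    progress = True
    while progress and len(left) > 1 and len(right) > 1:
        progress = False
        # drop a word that both patterns share at the same end, as long as the
        # shortened erroneous pattern stays rare in the correct corpus
        if left[0] == right[0] and freq_correct(" ".join(left[1:])) < n_condense:
            left, right = left[1:], right[1:]
            progress = True
        elif left[-1] == right[-1] and freq_correct(" ".join(left[:-1])) < n_condense:
            left, right = left[:-1], right[:-1]
            progress = True
    return " ".join(left), " ".join(right)
```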
The final step of the refinement is to rank the rules based on their quality. We proposed two plausible strategies to rank the rules. First, it is possible to rank the rules according to fre S+ (K L ) from low to high. In other words, a rule is less likely to incorrectly modify something right into something wrong if its fre S+ is low. Second, it is possible to rank the rules according to the number of sentences in the corpus to which they can be applied. The first strategy is similar to the definition of precision while the second is closer to the meaning of recall.
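Both ranking strategies are easy to express over the discovered rules; in this sketch (ours), `rules` is assumed to be a list of (K L , K R ) pairs.

```python
def rank_by_precision(rules, freq_correct):
    # rules rarely seen in the correct corpus first: least likely to damage correct text
    return sorted(rules, key=lambda rule: freq_correct(rule[0]))

def rank_by_coverage(rules, erroneous_sentences):
    # rules that match the largest number of erroneous sentences first
    return sorted(rules, key=lambda rule: -sum(rule[0] in s for s in erroneous_sentences))
```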
Experiments
We set N condense as 10 and retrieved 310967 pairs of English sentences from the "Lang-8" as our parallel corpus, and the system finally generated 110567 rules. To evaluate the framework, four experts were invited to annotate the rules. Then, we demonstrated an auto-editing system to show how such rules can be applied.
Evaluation
We ranked all of the rules according to their fre S+, and four English majors were invited to annotate the top 1500 ranked rules. Each rule was annotated by two persons. The labels for annotations were "correct," "mostly correct," "mostly wrong," "wrong," and "depends on context". Table 6 presents the experimental results and Figure 4 presents the evaluation system screenshot. A fair agreement was found between the two annotations, as the kappa value equals 0.49835.
Figure 4. Screenshot of Evaluation System
We also compared our system (using either all rules or only the highly ranked rules) with two other available auto-editing systems, ESL Assistant and the Microsoft Word Grammar Checker. The highly ranked rules were those with fre_S+(K_L) smaller than 10. We randomly retrieved 30 articles from Lang-8 that did not appear in our training corpus and used the corrections posted on the website as the gold standard. Table 7 shows the sentence-based recall and precision values.
Discussion
We also analyzed the rules manually. As shown in Table 8, most of the correct rules (67% of all rules) concern spelling errors, collocations and phrases, and subject-verb agreement. Most of the incorrect rules would lead to false suggestions, and 83% of the rules in the "depends on context" category concern chunks and phrases. Figure 5 shows the rule distribution. Table 9 lists some example rules discovered by our system that can hardly be detected or corrected by the Microsoft Word 2007 grammar checker.
Figure 5. Rule distribution
Auto-editing System
We constructed an online, real-time auto-editing system to demonstrate the usefulness of our rules for editorial assistance. As the user types, the system tests whether any part of the sentence matches an erroneous pattern. If there is a match, the matching chunk is marked in red and the correction rule is applied to suggest the correct pattern. The user can click the suggested correction (marked in green) to accept it, and the system makes the change automatically. The system is available at: http://mslab.csie.ntu.edu.tw/~kw/new_demo.html.
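A minimal sketch of the matching step, assuming the rules are stored as plain string pairs and matched on word boundaries (the demo system's actual matching and interface logic are not described at this level of detail):

```python
import re

def suggest_corrections(text, rules):
    """Return (erroneous span, suggested replacement) pairs found in the text.

    rules: list of (k_l, k_r) string pairs, e.g. ("went to shopping", "went shopping").
    """
    suggestions = []
    for k_l, k_r in rules:
        # match the erroneous pattern on word boundaries, case-insensitively
        for m in re.finditer(r"\b" + re.escape(k_l) + r"\b", text, flags=re.IGNORECASE):
            suggestions.append((m.span(), k_r))
    return suggestions

rules = [("went to shopping", "went shopping"), ("am so exciting", "am so excited")]
print(suggest_corrections("I went to shopping and am so exciting!", rules))
```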
Auto Editing
Figure 6. Screenshot of demo system
Figure 6 shows the overall system view. Two rule sets can be used: (1) "Highly-ranked Rules" uses only the higher-ranked rules and ignores the lower-ranked ones; (2) "All Rules" uses every rule but runs the risk of applying incorrect ones.
Figure 7. Screenshot of auto-editing
As Figure 7 shows, the user can type English sentences in the edit area. If any rule is matched, the suggested correction appears in green in the area above. If the user agrees with a correction, clicking on the green words edits the sentence accordingly.
Rules Keyword Search
Figure 8. Screenshot of keywords search in rule database
On the right-hand side of the page (Figure 8), the user can type a keyword to search for related rules; the system then lists all of the discovered rules relevant to that keyword. The screenshot shows the rules relevant to the keyword "course".
User Correction Feedback
When a user accepts a suggested correction, we regard this as one additional endorsement of the rule. Such feedback can be exploited to refine the rules, so we record it and use it to adjust the ranking: highly endorsed rules are gradually promoted.
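One simple way to exploit such feedback, sketched with an assumed per-rule endorsement counter (the paper does not specify the exact promotion scheme):

```python
from collections import Counter

endorsements = Counter()

def record_acceptance(rule_id):
    endorsements[rule_id] += 1

def rerank(rules):
    # rank by fre_S+ as before, breaking ties in favour of frequently endorsed rules
    return sorted(rules, key=lambda r: (r["fre_s_plus"], -endorsements[r["id"]]))
```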
Conclusion
In this research, we propose a language-universal framework capable of producing effective editing rules, whose quality can be assessed with the proposed ranking strategies. We have demonstrated the practical use of the rules by constructing an auto-editing system that provides editorial assistance for language learners. The correction rules in this paper were produced without considering syntactic structure or part-of-speech (POS) information; in the future, we would like to exploit both kinds of features to improve the performance of our system.
Table 1. One of the editing results for edit distance

| Operation  | Position | Involved word      |
|------------|----------|--------------------|
| Insert     | 6        | It                 |
| Delete     | 8        | It                 |
| Substitute | 4        | know → understand  |

Table 2. Another editing result for edit distance

| Operation  | Position | Involved word      |
|------------|----------|--------------------|
| Insert     | 8        | Is                 |
| Delete     | 6        | Is                 |
| Substitute | 4        | know → understand  |
Table 3. Pattern candidates for forming a rule

Candidates for K_L (word length ≤ 4): don't know; know where; where is; is it; it in; still don't know; don't know where; know where is; where is it; is it in; it in the; I still don't know; still don't know where; don't know where is; know where is it; where is it in; is it in the; it in the movie

Candidates for K_R (word length ≤ 4): don't understand; understand where; where it; it is; is in; still don't understand; don't understand where; understand where it; where
Table 4. Observation on the frequency

|           | Pattern          | fre_S+ |
|-----------|------------------|--------|
| Erroneous | went to shopping | 10     |
| Correct   | went shopping    | 205    |
| Erroneous | am so exciting   | 0      |
| Correct   | am so excited    | 71     |
Table 5. An example for condensing a rule

| Sentence segment | surprised | surprised at | am surprised | am surprised at | I am surprised at |
|------------------|-----------|--------------|--------------|-----------------|-------------------|
| Frequency        | 985       | 702          | 213          | 10              | 10                |
Table 6. The distribution of annotated results of the top 1500 rules

|          | Correct | Mostly correct | Mostly wrong | Wrong | Depends on context |
|----------|---------|----------------|--------------|-------|--------------------|
| R1~R1500 | 53.96%  | 12.96%         | 0.92%        | 4.5%  | 27.66%             |
Table 7. Evaluation results with 95% confidence

| System                                          | Recall         | Precision      |
|-------------------------------------------------|----------------|----------------|
| Our Auto-Editing System (All Rules)             | 20.28% ± 1.07% | 40.16% ± 0.6%  |
| Our Auto-Editing System (Highly Ranked Rules)   | 14.28% ± 0.74% | 77.32% ± 0.55% |
| ESL Assistant (Leacock, Gamon, & Brockett, 2009)| 18.4% ± 1.07%  | 42.36% ± 0.29% |
| Microsoft Word Grammar Checker                  | 14.28% ± 0.72% | 27.77% ± 1.03% |
Table 8. Manual analysis of rules

I. Correct & Mostly Correct (67% of rules)
1. Spelling: 60%
2. Collocation and phrase (sequences of words which co-occur more often than would be expected by chance): 15%
3. Agreement of subject and verb: 7%
4. Choice of verb tense: 5%
5. Gerund forms and infinitives: 2%
6. Choice of the proper article: 1%
7. Pluralization (irregular noun): 1%
8. Capitalization (use of capital letter): 1%
9. Other (use of preposition, word choice, cohesive devices, elliptical forms, punctuation, parts of speech, count and noncount nouns, etc.): 8%

II. Wrong & Mostly Wrong (0.9% of rules)
1. Suggestions of wrong corrections: 97%
2. Errors not to be spotted and corrected: 3%

III. Depends on Context and/or Writers' Intention (32.1% of rules)
1. Correctness of the chunks/phrases: 83%
2. Verbal and verb tense: 5%
3. Spelling (more than one possibility): 3%
4. Word choice: 2%
5. Others (use of preposition, conjunction, cohesive devices, parts of speech, etc.): 7%
Table 9. Example rules discovered by the proposed system

am worry about => am worried about
help me to study => help me study
I will appreciate it => I would appreciate it
went to shopping => went shopping
am so exciting => am so excited
waked => woke
look forward to read => look forward to reading
for read my => for reading my
The street name => The street's name
to playing with => to play with
He promised to me => He promised me
asked repeat => repeatedly asked
Have you listen to => Have you listened to
It's rains => It's raining
I ate a milk => I had milk
for the long time => for a long time
don't cooking => don't cook
will success => will succeed
don't know what happen => don't know what happened
Heidorn, E. (2000). Intelligent Writing Assistance. In Robert, D., Hermann, M., & Harold, S. (eds.), Handbook of Natural Language Processing. New York: Marcel Dekker.
Michaud, L., McCoy, K., & Pennington, C. (2000). An Intelligent Tutoring System for Deaf Learners of Written English. Proceedings of the Fourth International ACM Conference on Assistive Technologies, 92-100.
Dan, E., Flickinger, D., Oepen, S., Walsh, A., & Baldwin, T. (2004). Arboretum: Using a precision grammar for grammar checking in CALL. In Proceedings of the InSTIL/ICALL Symposium: NLP and Speech Technologies in Advanced Language Learning Systems.
Chodorow, M., & Leacock, C. (2000). An Unsupervised Method for Detecting Grammatical Errors. Proceedings of the 1st North American Chapter of the Association for Computational Linguistics Conference, 140-147.
Hermet, M., & Desilets, A. (2009). Using First and Second Language Models to Correct Preposition Errors in Second Language Authoring. Proceedings of the Fourth Workshop on Innovative Use of NLP for Building Educational Applications, 64-72.
Brocket, C., Dolan, W., & Gamon, M. (2006). Correcting ESL errors using phrasal SMT techniques. Proceedings of the 21st International Conference on Computational Linguistics and the 44th Annual Meeting of the Association for Computational Linguistics, 249-256.
Sun, G., Liu, X., Cong, G., Zhou, M., Xiong, Z., Lin, C. Y., & Lee, J. (2007). Detecting Erroneous Sentences Using Automatically Mined Sequential Patterns. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, 81-88.
Sun, G., Cong, G., Liu, X., Lin, C.-Y., & Zhou, M. (2007). Mining Sequential Patterns and Tree Patterns to Detect Erroneous Sentences. Proceedings of the 22nd National Conference on Artificial Intelligence, Volume 1, 925-930.
Shi, Y., & Zhou, L. (2005). Error Detection Using Linguistic Features. Proceedings of Human Language Technology Conference and Conference on Empirical Methods in Natural Language Processing, 41-48.
Levenshtein, V. I. (1966). Binary codes capable of correcting deletions, insertions and reversals. Soviet Physics Doklady, 10(8), 707-710.
Lü, Y., & Zhou, M. (2004). Collocation translation acquisition using monolingual corpora. In Proceedings of the Association for Computational Linguistics.
Leacock, C., Gamon, M., & Brockett, C. (2009). User Input and Interactions on Microsoft Research ESL Assistant. Proceedings of the Fourth Workshop on Innovative Use of NLP for Building Educational Applications, 73-81.
Levenshtein. (n.d.). Retrieved from the Levenshtein Wiki: http://en.wikipedia.org/wiki/Levenshtein_distance
171,555,376 | [] | Segmentation automatique d'un texte en rhèses
Constance Nin constance.nin@etu.univ-nantes.fr
LINA
Université de Nantes
2 rue de la HoussinièreBP92208, 44322, Cedex 3Nantes
Victor Pineau victor.pineau@etu.univ-nantes.fr
LINA
Université de Nantes
2 rue de la HoussinièreBP92208, 44322, Cedex 3Nantes
Béatrice Daille beatrice.daille@univ-nantes.fr
LINA
Université de Nantes
2 rue de la HoussinièreBP92208, 44322, Cedex 3Nantes
Solen Quiniou solen.quiniou@univ-nantes.fr
LINA
Université de Nantes
2 rue de la HoussinièreBP92208, 44322, Cedex 3Nantes
Segmentation automatique d'un texte en rhèses
Keywords: rhesis, chunk, supervised learning, dyslexia, annotation guide

Abstract: The segmentation of a text into rheses, the meaningful member units of a sentence, makes it possible to provide adaptations of the text that ease reading for dyslexic people. In this paper, we propose an automatic method for identifying rheses based on supervised learning from a corpus we annotated. We compare it with manual identification as well as with related tools and concepts, such as segmenting a text into chunks.
Introduction
The spread of books on digital media opens up new ways of assisting people with reading disorders. These difficulties affect the acquisition of reading and writing without any sensory (sight, hearing), intellectual or social disorder being responsible. The French National Institute of Health and Medical Research (Inserm) distinguishes, among these disorders, dyspraxia (a disorder of motor development and writing), dyscalculia (a disorder of numerical activities), dysphasia (a disorder of oral language), attention disorders, and dyslexia. Inserm considers dyslexia a specific reading disorder whose essential characteristic is a specific and significant impairment of reading acquisition (Pull, 1994). The correspondence between graphemes and morphemes, as well as the role of words in the sentence (Ramus et al., 2003), is the main problem encountered by children affected by this phonological disorder (Snowling, 2012). Children's publishing houses have proposed adaptations for these readers, such as specific typefaces, larger margins and spacing, or simplified illustrations for better concentration. The digital era promises not only cheaper production but, above all, the automation of these strategies through tablets, which allow books created specifically for this audience to be bought and read. Publishers have thus offered features for dyslexic readers, generally setting typographic constraints aside, for example audio interfaces such as voice dictation and screen readers (Sitbon et al., 2007). The company Mobidys, for its part, aims to offer various typography-related features for digital reading on tablets: a more open page layout, specific typefaces, syllabic or phonetic coloring, or dictionary access. These strategies have already proved effective in a web browser (Parilova et al., 2016) and allow dyslexic readers to devote as much of their short-term memory, and hence their attentional resources, as possible to the text and its meaning. Nevertheless, this typographic work is only one of the factors to take into account. Readability, the ease with which the information of a text can be grasped, is also linked to the length of the text, its coherence, and its segmentation (Sitbon et al., 2007). This article therefore describes a method for automatically segmenting children's literature texts, adapted to dyslexic readers and available in digital form. The introduction of the concept of the rhesis and its application to NLP is part of a partnership project with the company Mobidys.

In Section 2, we settle on a definition of our unit of text segmentation: the rhesis. We then discuss, in Section 3, the construction of the reference corpus. Section 4 details the different methods used, which are evaluated in Section 5.
Definitions of the rhesis

Our research identified three main contemporary domains: speech therapy, linguistics and theatre. We only consider the first two here, the theatrical definition being tied to specific diction requirements. First, in speech therapy, the rhesis denotes the quantity of speech that can be uttered in a single breath (Brin et al., 2011). The rhesis then plays a tonic role, where the vowel pattern conveys the intended emotion. In linguistics, the rhesis is a "cadence unit (...) a group ordinarily formed by a verb or a noun together with its closest complements. One can say, for example: J'ai parlé | au roi, in two rheses, or: J'ai parlé au roi, in a single rhesis." (Damourette & Pichon, 1936). Finally, the work of (Cartier, 1978) in the field of educational technology introduces our working definition. The rhesis appears there as the "member unit of the sentence, or small sentence having a meaning by itself, capable of forming a unit of thought. (The latter) is purely intuitive and empirical, (without) any precise psycholinguistic criterion (...) segmentation into rheses (would lead) to a perceptual segmentation." (Ehrlich & Tardieu, 1985). The rhesis, in its speech-therapy sense, is close to the concept of the prosodic unit. Several studies aim at automatically identifying the boundaries of these units; most of them do so by jointly analyzing spoken text and its written transcription (Avanzi et al., 2008). Black and Taylor (Black & Taylor, 1998) propose a method for automatically identifying these boundaries based solely on textual data; their approach, applied to English texts, relies on a learning method based on Markov models.

Chunks (shallow syntactic analysis) are also close to rheses, since a chunk is "the smallest possible sequence of linguistic units forming a group with a strong head, which is neither discontinuous nor recursive" (Abney, 1991). Since chunks "define the shallow syntactic structure of sentences" (Constant et al., 2011), we could regard each chunk as a rhesis. However, chunking splits utterances too finely, in particular on grammatical words, which severely hampers reading. For example, the text La pensée qu'il était temps de chercher le sommeil m'éveillait ; is split into chunks 1 as follows: La pensée | qu'il | était | temps | de | chercher | le sommeil | m'éveillait | ; whereas we would like to obtain the following segmentation into rheses: La pensée | qu'il était temps | de chercher le sommeil | m'éveillait ;. Training a "rheser" on a corpus segmented into rheses is therefore necessary to segment a text into rheses in the desired way; this is what we present in Section 4. As (Damourette & Pichon, 1936) point out, several rhesis segmentations are acceptable for a single utterance 2. It is therefore appropriate to measure the average agreement between human annotators on this segmentation task, in order to provide a reference point for interpreting the scores obtained by the automatic methods presented below. Two annotators familiar with the task (a student in French as a foreign language and a student in natural language processing) manually annotated each text into rheses and compared the resulting segmentation with the one proposed by the speech therapists. The following measures were used:

• Fleiss' kappa: we treat the task as the classification of the gaps between tokens, each gap being classified as a rhesis boundary or not.

• The average F-measure: we treat the task as the detection of rhesis boundaries, and compare the annotations pairwise, temporarily taking one of them as the reference.

Learning a rhesis model thus requires a corpus manually segmented into rheses. This manual segmentation can be carried out either intuitively or with the help of an annotation guide. This is what we present in the remainder of this section.
Learning from an intuitively rhesis-segmented training corpus

The training corpus, Emporté par le vent (a children's book of 6,794 words, also provided by Mobidys), was annotated into rheses intuitively by a linguistics student. This annotation then allows us to learn a rhesis segmentation model with a learning tool originally designed for chunk segmentation.

Because of the limited amount of training data, with a single training corpus available, semantic or idiomatic constructions are difficult for a statistical approach to capture and degrade learning. Another strategy was therefore considered: the creation of an annotation guide. This not only allows the corpus to be enlarged through simultaneous annotation by several people, but also facilitates statistical learning by basing the rhesis segmentation on more formal principles.
Learning from a corpus segmented into rheses with an annotation guide

The annotation guide we developed proposes a working definition of rheses based on grammatical rules. It was built iteratively, by deriving formal rules from the observation of intuitive segmentations. The guide prescribes to annotators the grammatical rules for splitting into rheses, the goal being to respect a maximum of 30 characters per line (the maximum visual span desired by the expert speech therapists). A first step splits the utterance on punctuation; then, if the maximum span is still exceeded, on clauses, next on conjunctions and prepositions. If that is still not enough, the guide prescribes splitting phrases, before resorting to word-level segmentation. The models are evaluated by comparing the rhesis segmentation produced with each of them against the one produced by the speech therapists, using the F-measure.
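As a rough illustration of the guide's cascade (not the guide itself), the sketch below splits on punctuation first, then on a small, assumed list of conjunctions and prepositions, and finally at an arbitrary phrase boundary, until every segment fits a 30-character span:

```python
MAX_SPAN = 30
PUNCT = {",", ";", ":", "!", "?"}
CONJ_PREP = {"et", "ou", "mais", "de", "à", "dans", "pour", "sur", "avec", "que"}

def split_rheses(tokens):
    """Recursively split a token list until each segment fits the 30-character span."""
    text = " ".join(tokens)
    if len(text) <= MAX_SPAN or len(tokens) <= 1:
        return [text]
    # candidate cut points: after a punctuation token, or before a conjunction/preposition
    cuts = [i + 1 for i, t in enumerate(tokens[:-1]) if t in PUNCT]
    if not cuts:
        cuts = [i for i, t in enumerate(tokens) if t.lower() in CONJ_PREP and 0 < i < len(tokens)]
    if not cuts:
        cuts = [len(tokens) // 2]          # fall back to an arbitrary phrase split
    cut = min(cuts, key=lambda i: abs(i - len(tokens) // 2))  # keep segments balanced
    return split_rheses(tokens[:cut]) + split_rheses(tokens[cut:])

print(split_rheses("La pensée qu' il était temps de chercher le sommeil m' éveillait".split()))
```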
The scores in Table 3 show that learning a model on a corpus specifically annotated into rheses brings a clear improvement in their identification compared with simply using a chunking tool. They also show the benefit of the annotation guide: the regularity of its rules reduces noise during learning and further improves the segmentation appreciably. These scores can be compared with the inter-annotator agreements presented in Section 3, computed with the same F-measure. A qualitative analysis of the produced segmentations is also informative, as it helps identify the most common error types and judge the quality of the proposed identification more finely. Table 4 compares the reference segmentations with those obtained by the different automatic methods. The learned rhesers yield a coarser granularity than the chunker, closer to the desired reference segmentation; the segmentations produced by the rhesers can nevertheless still be too fine compared to the reference. The unsatisfactory segmentations of rheser 2 are most often due to two causes. The first is the incorrect attachment of an adverb when it occurs between a verb phrase and its object. An analysis of adverbs shows that their attachment to the verb phrase or to its object generally depends on their category: for example, an adverb of quantity is preferentially attached to the object that follows it (L'Homme est | très en colère), whereas an adverb of time is generally attached to the preceding verb phrase (Cassim se retrouve bientôt | dans la grotte au trésor !). A possible fix is to use a part-of-speech model that distinguishes adverbs by category. The second cause is the splitting of a fixed expression or a named entity; correcting these cases would require detecting such entities upstream of the segmentation process.
Conclusion and perspectives

We have presented a method for the automatic identification of rheses. The different approaches show that the task is feasible with a statistical learning method, and show the contribution of an annotation guide: it allows a rheser to be trained on a larger volume of more homogeneous data. The latest version of this method yields satisfactory results, comparable to those produced by a human expert.

To improve rhesis identification, it would be interesting to take various fixed expressions into account by making them unbreakable. This processing could be done upstream of the rhesis segmentation, using a dedicated dictionary. It is also conceivable to detect phrases that acquire a formulaic character through their repetition in a given text, which is common in the fairy-tale genre (Le grand méchant loup, Le vilain petit canard), using a co-occurrence detection method.
Figure 1: Process of learning and segmenting a text into rheses

3 Construction of the reference corpus

The company Mobidys provided us with a first, small corpus of two children's books aimed at different readerships: L'Arbre et le Bûcheron contains 2,560 words (chapter titles excluded) and targets children aged 11 and over; Ali Baba et les 40 voleurs contains 1,426 words (chapter titles excluded) and targets children aged 8 and over. Both texts come with a rhesis segmentation produced by the expert speech therapists, as they would like it to be done automatically. These two books constitute our reference corpus.
Table 2: Inter-annotator agreements obtained for annotation with the annotation guide

The inter-annotator agreement scores in Table 2 are computed with the measures detailed in Section 3. They are obtained for the annotation of the same text by the same two annotators familiar with the task as in Table 1, and are computed on the training corpus as well as on the two reference corpora. They are noticeably higher than the agreement scores obtained with the intuitive rhesis segmentation (see Table 1). This improvement in agreement, due to the annotation guide, mitigates the problems caused by the small amount of data for a statistical learning method by bringing strong regularity to the rhesis segmentation.

5 Results and discussion

Tokenization and POS tagging are performed beforehand with OpenNLP, using models trained on the Free French Treebank (Hernandez & Boudin, 2013). The chunking tool is OpenNLP, based on Markov-model technology as used by Schmid and Atterer (Schmid & Atterer, 2004); the chunking model is also trained on the Free French Treebank (Hernandez & Boudin, 2013). The rhesis segmentation models are obtained by training, respectively, on a training corpus built from Emporté par le vent segmented intuitively into rheses (see Section 4.1), and on a corpus built from the same text but segmented with the annotation guide (see Section 4.2).
Table 3: Evaluation of the different rhesis segmentation tools on the reference corpora

These agreements assess the feasibility of the task: 0.87 agreement on the intuitive manual rhesis segmentation for L'Arbre et le Bûcheron, and 0.86 for Ali Baba et les 40 voleurs. They can also be compared with the F-measure scores obtained by Schmid and Atterer (Schmid & Atterer, 2004) for a similar task on English: their best tool obtains scores ranging from 0.78 to 0.85 depending on the evaluation corpus.

Table 4: Qualitative evaluation of the results produced by the different segmentation tools

Chunker:
- Automatic: "Les arbres | sont | roux | et | dorés" / Reference: "Les arbres sont roux et dorés" / Observation: granularity too fine

Rheser 1:
- Automatic: "Les arbres sont roux | et dorés" / Reference: "Les arbres sont roux et dorés" / Observation: granularity too fine
- Automatic: "Il regarde autour | de lui" / Reference: "Il regarde autour de lui" / Observation: segmentation that degrades readability
- Automatic: "Il reste quelques mûres et de petites pommes sauvages | dans les haies" / Reference: "Il reste quelques mûres | et de petites pommes sauvages | dans les haies" / Observation: granularity not fine enough

Rheser 2:
- Automatic: "Cela dura mille | et une nuits !" / Reference: "Cela dura | mille et une nuits !" / Observation: fixed expressions and named entities not taken into account
- Automatic: "L'Homme est très | en colère" / Reference: "L'Homme est | très en colère" / Observation: adverbs often mis-segmented when they occur between a verb phrase and its object

1 This chunk segmentation is obtained with the OpenNLP chunker, using a model trained on the Free French Treebank (Hernandez & Boudin, 2013). 2 An utterance is defined by (Siouffi & Van Raemdonck, 2012) as a "fragment of discourse, shorter or longer than the sentence; the realization of a sentence in a given situation."
Parsing by chunks. P Abney S, Principle-Based Parsing: Computation and Psycholinguistics. KluwerABNEY S. P. (1991). Parsing by chunks. In Principle-Based Parsing: Computation and Psycholin- guistics, p. 257-278. Kluwer.
Analor, un outil d'aide pour la modélisation de l'interface prosodie-grammaire. M Avanzi, Lacheret A, Victorri B, Actes du colloque CERLICO. s du colloque CERLICOAVANZI M., LACHERET A. & VICTORRI B. (2008). Analor, un outil d'aide pour la modélisation de l'interface prosodie-grammaire. In Actes du colloque CERLICO, p. 27-46.
Assigning phrase breaks from part-of-speech sequences. Black A, Taylor P, Computer Speech and Language. 122BLACK A. & TAYLOR P. (1998). Assigning phrase breaks from part-of-speech sequences. Computer Speech and Language, 12(2), 99-117.
Dictionnaire d'orthophonie. Brin F, C Courrier, Lderlé E. & Masy V, Ortho EditionBRIN F., COURRIER C., LDERLÉ E. & MASY V. (2011). Dictionnaire d'orthophonie. Ortho Edition.
Le caractère de l'édition de texte. Cartier M, Non publiéCARTIER M. (1978). Le caractère de l'édition de texte. Non publié.
. M Constant, Tellier I, D Duchier, Dupont D, Sigogne A, Billot S, CONSTANT M., TELLIER I., DUCHIER D., DUPONT D., SIGOGNE A. & BILLOT S. (2011).
Intégrer des connaissances linguistiques dans un crf : application à l'apprentissage d'un segmenteurétiqueteur du français. Actes de la conférence TALN. s de la conférence TALNIntégrer des connaissances linguistiques dans un crf : application à l'apprentissage d'un segmenteur- étiqueteur du français. In Actes de la conférence TALN.
Des mots à la pensée, essai de grammaire de la langue française. Damourette J Pichon E, Collection des linguistes contemporains, chapter VII. CNRSÉDITIONS D'ARTHREYDAMOURETTE J. & PICHON E. (1936). Des mots à la pensée, essai de grammaire de la langue française. In ÉDITIONS D'ARTHREY, Ed., Collection des linguistes contemporains, chapter VII. CNRS.
Lire, comprendre, mémoriser les textes sur écran vidéo. Ehrlich M.-F. & Tardieu H , Communication et langages. 651EHRLICH M.-F. & TARDIEU H. (1985). Lire, comprendre, mémoriser les textes sur écran vidéo. Communication et langages, 65(1), 91-106.
Construction automatique d'un large corpus libre annoté morpho-syntaxiquement en français. Hernandez N, Boudin F, Actes de la conférence TALN-RECITAL. s de la conférence TALN-RECITALHERNANDEZ N. & BOUDIN F. (2013). Construction automatique d'un large corpus libre annoté morpho-syntaxiquement en français. In Actes de la conférence TALN-RECITAL.
Emerging technology enabling dyslexia users to read and perceive written text correctly. Parilova T, F Mrvan, Mizik B. & Hldka E, Actes de la conférence CiCLING. s de la conférence CiCLINGPARILOVA T., MRVAN F., MIZIK B. & HLDKA E. (2016). Emerging technology enabling dyslexia users to read and perceive written text correctly. In Actes de la conférence CiCLING.
Troubles spécifiques du développement des acquisitions scolaires. Classification internationale des maladies : dixième révision. C Pull, PULL C. (1994). Troubles spécifiques du développement des acquisitions scolaires. Classification internationale des maladies : dixième révision. O.M.S.
Theories of developmental dyslexia : insights from a multiple case study of dyslexic adults. Ramus F, Rosen S, S Dakin, Day B, J Castelotte, White S. & Frith U, Brain. 4126RAMUS F., ROSEN S., DAKIN S., DAY B., CASTELOTTE J., WHITE S. & FRITH U. (2003). Theories of developmental dyslexia : insights from a multiple case study of dyslexic adults. Brain, 4(126), 841-865.
New statistical methods for phrase break prediction. Schmid H. & Atterer M, Actes de la conférence COLING. s de la conférence COLINGSCHMID H. & ATTERER M. (2004). New statistical methods for phrase break prediction. In Actes de la conférence COLING.
100 fiches pour comprendre la linguistique, 4ème édition. Siouffi G, Van Raemdonck D, BréalSIOUFFI G. & VAN RAEMDONCK D. (2012). 100 fiches pour comprendre la linguistique, 4ème édition. Bréal.
Éléments pour adapter les systèmes de recherche d'information aux dyslexiques. Sitbon L, P & Bellot, Blache P, Revue TAL. 483SITBON L., BELLOT P. & BLACHE P. (2007). Éléments pour adapter les systèmes de recherche d'information aux dyslexiques. Revue TAL, 48(3), 1-26.
Early identification and interventions for dyslexia : a contemporary view. Snowling M, Journal of Research in Special Educational Needs (JORSEN). 131SNOWLING M. (2012). Early identification and interventions for dyslexia : a contemporary view. Journal of Research in Special Educational Needs (JORSEN), 13(1), 7-14. |
||
31,826,507 | ECNU at SemEval-2017 Task 1: Leverage Kernel-based Traditional NLP features and Neural Networks to Build a Universal Model for Multilingual and Cross-lingual Semantic Textual Similarity | To model semantic similarity for multilingual and cross-lingual sentence pairs, we first translate foreign languages into English, and then build an efficient monolingual English system with multiple NLP features. Our system is further supported by deep learning models and our best run achieves the mean Pearson correlation 73.16% in primary track. | [
216848261,
14068874,
11252815,
15784336,
16579379,
4421747,
12233462,
13896151,
1957433,
18720573
] | ECNU at SemEval-2017 Task 1: Leverage Kernel-based Traditional NLP features and Neural Networks to Build a Universal Model for Multilingual and Cross-lingual Semantic Textual Similarity
Association for Computational LinguisticsCopyright Association for Computational LinguisticsAugust 3 -4, 2017. 2017
Junfeng Tian jftian@stu.ecnu.edu.cn
Department of Computer Science and Technology
East China Normal University
ShanghaiP.R.China
Zhiheng Zhou zhzhou@stu.ecnu.edu.cn
Department of Computer Science and Technology
East China Normal University
ShanghaiP.R.China
Man Lan mlan@cs.ecnu.edu.cn
Department of Computer Science and Technology
East China Normal University
ShanghaiP.R.China
Shanghai Key Laboratory of Multidimensional Information Processing
Yuanbin Wu ybwu@cs.ecnu.edu.cn
Department of Computer Science and Technology
East China Normal University
ShanghaiP.R.China
Shanghai Key Laboratory of Multidimensional Information Processing
ECNU at SemEval-2017 Task 1: Leverage Kernel-based Traditional NLP features and Neural Networks to Build a Universal Model for Multilingual and Cross-lingual Semantic Textual Similarity
Proceedings of the 11th International Workshop on Semantic Evaluations (SemEval-2017)
the 11th International Workshop on Semantic Evaluations (SemEval-2017)Vancouver, CanadaAssociation for Computational LinguisticsAugust 3 -4, 2017. 2017
To model semantic similarity for multilingual and cross-lingual sentence pairs, we first translate foreign languages into English, and then build an efficient monolingual English system with multiple NLP features. Our system is further supported by deep learning models, and our best run achieves a mean Pearson correlation of 73.16% in the primary track.
Introduction
Sentence semantic similarity is a building block of natural language understanding. Previous Semantic Textual Similarity (STS) tasks in SemEval focused on judging sentence pairs in English and achieved great success. The SemEval-2017 STS shared task concentrates on evaluating sentence semantic similarity in multilingual and cross-lingual settings (Agirre et al., 2017). There are two challenges in modeling multilingual and cross-lingual sentence similarity. On the one hand, the task requires human linguistic expertise to design language-specific features, owing to the different characteristics of each language. On the other hand, the lack of sufficient training data for a particular language leads to poor performance.

The SemEval-2017 STS shared task assesses the ability of participant systems to estimate the degree of semantic similarity between monolingual and cross-lingual sentences in Arabic, English and Spanish. It is organized into six secondary sub-tracks (Track 1 to Track 6) and a single combined primary track (Primary Track), obtained by submitting results for all of the secondary sub-tracks. Specifically, Tracks 1, 3 and 5 determine STS scores for monolingual sentence pairs in Arabic, Spanish and English, respectively. Tracks 2, 4 and 6 involve estimating STS scores for cross-lingual sentence pairs from combinations of two languages, i.e., Arabic-English, Spanish-English and surprise language (here, Turkish)-English pairs. Given two sentences, a continuous-valued similarity score on a scale from 0 to 5 is returned, with 0 indicating that the semantics of the sentences are completely independent and 5 signifying semantic equivalence. Systems are assessed by computing the Pearson correlation between the returned similarity scores and human judgements.

To address this task, we first translate all sentences into English with a state-of-the-art machine translation (MT) system, Google Translate 1. We then adopt a combined approach to build a universal model for estimating semantic similarity, consisting of traditional natural language processing (NLP) methods and deep learning methods. For the traditional NLP methods, we design multiple effective NLP features to capture the degree of semantic matching and train supervised regressors to make predictions. For the neural network methods, we first obtain distributed representations for each sentence of a pair and then feed these representations into end-to-end neural networks that output similarity scores. Finally, the scores returned by the regressors and by the neural network models are averaged with equal weights to obtain the final similarity score.
2 System Description

Figure 1 shows the overall architecture of our system, which consists of the following three modules.

Figure 1: The system architecture

The Traditional NLP Module extracts two kinds of NLP features. The sentence pair matching features directly calculate the similarity of two sentences from several aspects, while the single sentence features first represent each sentence with NLP methods and then adopt kernel-based measures to calculate the similarity of the two sentences. All of these NLP-based similarity scores act as features for building regressors that make the prediction.

The Deep Learning Module encodes input sentence pairs into distributed vector representations and trains end-to-end neural networks to obtain similarity scores.

The Ensemble Module averages the scores of the above two modules with equal weights to obtain the final score.
Next, we will describe the system in detail.
Traditional NLP Module
In this section, we give the details of feature engineering and learning algorithms.
Sentence Pair Matching Features
Five types of sentence pair matching features are designed to directly calculate the similarity of two sentences based on the overlaps of character/word/sequence, syntactic structure, alignment and even MT metrics.
N-gram Overlaps: Let $S_i$ be the set of consecutive n-grams of sentence $i$; the n-gram overlap (denoted ngo) is defined as (Šarić et al., 2012):

$$ngo(S_1, S_2) = 2 \cdot \left( \frac{|S_1|}{|S_1 \cap S_2|} + \frac{|S_2|}{|S_1 \cap S_2|} \right)^{-1}$$

We obtain n-grams at three different levels (i.e., original words, lemmatized words and characters), where n = {1, 2, 3} is used at the word levels and n = {2, 3, 4, 5} at the character level. In total, we collect 10 features.

Sequence Features: Sequence features are designed to capture richer sequence information beyond the n-gram overlaps. We compute the longest common prefix / suffix / substring / subsequence and the Levenshtein distance for each sentence pair. Note that stopwords are removed and each word is lemmatized so as to estimate sequence similarity more accurately. As a result, we get 5 features.
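A minimal sketch of the n-gram overlap defined above for word n-grams (the character-level variant only changes how the grams are extracted):

```python
def ngrams(tokens, n):
    return {tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def ngram_overlap(tokens1, tokens2, n):
    s1, s2 = ngrams(tokens1, n), ngrams(tokens2, n)
    common = s1 & s2
    if not common:
        return 0.0
    # harmonic-mean style overlap from the formula above
    return 2.0 / (len(s1) / len(common) + len(s2) / len(common))

print(ngram_overlap("the cat sat on the mat".split(),
                    "the cat lay on the mat".split(), 2))  # 0.6
```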
Syntactic Parse Features: In order to model tree structured similarity between two sentences rather than sequence-based similarity, inspired by Moschitti (2006), we adopt tree kernels to calculate the similarity between two syntactic parse trees. In particular, we calculate the number of common substructures in three different kernel spaces, i.e., subtree (ST), subset tree (SST), partial tree (PT). Thus we get 3 features.
Alignment Features: Sultan et al. (2015) used a word aligner to align matching words across a pair of sentences and then computed the proportion of aligned words as follows:

$$sim(S_1, S_2) = \frac{n_a(S_1) + n_a(S_2)}{n(S_1) + n(S_2)}$$

where $n_a(S)$ and $n(S)$ are the numbers of aligned and of non-repeated words in sentence S, respectively. To assign appropriate weights to different words, we adopt two weighting methods: i) weighting by five POS categories (noun, verb, adjective, adverb and others): we first group the words of the two sentences into the 5 POS categories, then compute the proportion of aligned words for each category, yielding 5 features; ii) weighting by IDF values (calculated separately on each dataset). In total, we collect 7 alignment features.
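A small sketch of the unweighted alignment proportion, assuming the alignment is given as a list of token index pairs (the monolingual aligner of Sultan et al. itself is not reproduced here):

```python
def alignment_similarity(tokens1, tokens2, alignment):
    """alignment: list of (i, j) pairs linking tokens1[i] to tokens2[j]."""
    aligned1 = {i for i, _ in alignment}
    aligned2 = {j for _, j in alignment}
    n1, n2 = len(set(tokens1)), len(set(tokens2))   # non-repeated words
    if n1 + n2 == 0:
        return 0.0
    return (len(aligned1) + len(aligned2)) / (n1 + n2)

s1 = "a young girl is playing piano".split()
s2 = "a girl plays the piano".split()
print(alignment_similarity(s1, s2, [(0, 0), (2, 1), (5, 4)]))
```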
MT based Features: Following previous work in (Zhao et al., 2014) and (Zhao et al., 2015), we use MT evaluation metrics to measure the semantic equivalence of the given sentence pairs. Nine MT metrics (i.e., BLEU, GTM-3, NIST, -WER, -PER, Ol, -TERbase, METEOR-ex, ROUGE-L) are used to assess the similarity. These 9 MT based features are calculated using the Asiya Open Toolkit 2 .
Finally, we collect a total of 34 sentence pair matching features.
Single Sentence Features
Unlike above sentence pair matching features to directly estimate matching score between two sentences, the single sentence features are to represent each sentence in the same vector space to calculate the sentence similarity. We design the following three types of features.
BOW Features: Each sentence is represented as a Bag-of-Words (BOW) and each word (i.e., dimension) is weighted by its IDF value.
Dependency Features: For each sentence, its dependency tree is interpreted as a set of triples, i.e., (dependency-label, governor, subordinate). Similar to BOW, we treat triples as words and represent each sentence as Bag-of-Triples.
Word Embedding Features: Each sentence is represented by concatenating the min/max/average pooling of the vector representations of its words. Note that each word vector is weighted by the word's IDF value. Table 1 lists the four state-of-the-art pretrained word embeddings used in this work.

Table 1: Four pretrained word embeddings

| Embedding | Dimension | Source |
|-----------|-----------|--------|
| word2vec (Mikolov et al., 2013) | 300d | GoogleNews-vectors-negative300.bin |
| GloVe (Pennington et al., 2014) | 100d | glove.6B.100d.txt |
| GloVe (Pennington et al., 2014) | 300d | glove.6B.300d.txt |
| paragram (Wieting et al., 2015) | 300d | paragram 300 sl999.txt |

However, in comparison with the number of sentence pair matching features (34 features), the dimensionality of the single sentence features is huge (more than 71K dimensions) and would thus suppress the discriminating power of the sentence pair matching features. Therefore, in order to reduce this high dimensionality, for each single sentence feature we use 11 kernel functions to calculate sentence pair similarities. Table 2 lists the 11 kernel functions used in this work. In total we collect 33 single sentence features, which is of the same order of magnitude as the number of sentence pair matching features.

Finally, these 67 NLP features are standardized into [0, 1] using max-min normalization before building the regressor models.
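A sketch of turning two single-sentence vectors into kernel-based pair features with scikit-learn and SciPy; the exact parameterization of the 11 kernels is not spelled out in the paper, so library defaults are used here purely for illustration:

```python
import numpy as np
from scipy.stats import pearsonr, spearmanr, kendalltau
from sklearn.metrics.pairwise import (polynomial_kernel, rbf_kernel,
                                      laplacian_kernel, sigmoid_kernel)

def kernel_pair_features(v1, v2):
    """Similarity features between two sentence representation vectors."""
    v1, v2 = np.asarray(v1, float), np.asarray(v2, float)
    a, b = v1.reshape(1, -1), v2.reshape(1, -1)
    return [
        # linear / distance-based measures
        float(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)),
        float(np.sum(np.abs(v1 - v2))),          # Manhattan
        float(np.linalg.norm(v1 - v2)),          # Euclidean
        float(np.max(np.abs(v1 - v2))),          # Chebyshev
        # statistical correlations
        pearsonr(v1, v2)[0], spearmanr(v1, v2)[0], kendalltau(v1, v2)[0],
        # non-linear kernels
        polynomial_kernel(a, b)[0, 0], rbf_kernel(a, b)[0, 0],
        laplacian_kernel(a, b)[0, 0], sigmoid_kernel(a, b)[0, 0],
    ]
```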
Regression Algorithms
Five regression algorithms are explored: Random Forests (RF), Gradient Boosting (GB), Support Vector Machines (SVM), Stochastic Gradient Descent (SGD) and XGBoost (XGB). Specifically, the first four algorithms are implemented with the scikit-learn toolkit 3, and XGB with the xgboost package 4. In preliminary experiments, SVM and SGD underperformed the other three algorithms, so we adopt RF, GB and XGB in the following experiments.
Deep Learning Module
Unlike the above methods based on manually designed NLP features, the deep learning models calculate the semantic similarity score with pretrained word vectors as input. The four pretrained word embeddings listed in Table 1 are explored, and the paragram embeddings achieve the best results in preliminary experiments. A possible reason is that the paragram embeddings are trained on the Paraphrase Database 5, an extensive semantic resource consisting of many phrase pairs. We therefore use the paragram embeddings to initialize the word vectors.

Based on the pretrained word vectors, we adopt the following four methods to obtain a single sentence vector, as in (Wieting et al., 2015):
(1) by simply averaging the word vectors in single sentence;
(2) after (1), the resulting averaged vector is multiplied by a projection matrix;
(3) by using deep averaging network (DAN, Iyyer et al. (2015)) consisting of multiple layers as well as nonlinear activation functions;
(4) by using long short-term memory network (LSTM, Hochreiter and Schmidhuber (1997)) to capture long-distance dependencies information.
To obtain the sentence pair vector, given the two single sentence vectors, we compute their element-wise subtraction and element-wise multiplication and concatenate the two results as the final sentence pair representation. Finally, a fully-connected neural network outputs a probability distribution over similarity values through a softmax function. We thus obtain 4 deep learning based scores, one per sentence encoder.
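A minimal sketch of the pair representation described above, assuming the two sentence vectors have already been computed (the fully-connected layer and softmax that follow are omitted):

```python
import numpy as np

def sentence_vector(word_vectors):
    # simplest encoder: average the word vectors of a sentence (method (1) above)
    return np.mean(word_vectors, axis=0)

def pair_representation(v1, v2):
    # element-wise difference and element-wise product, concatenated
    return np.concatenate([v1 - v2, v1 * v2])

v1 = sentence_vector(np.random.rand(5, 300))
v2 = sentence_vector(np.random.rand(7, 300))
features = pair_representation(v1, v2)   # length 600, fed to a fully-connected layer
```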
To learn model parameters, we minimize the KL-divergence between the outputs and gold labels, as in Tai et al. (2015). We adopt Adam (Kingma and Ba, 2014) as optimization method and set learning rate of 0.01.
Ensemble Module
The NLP-based scores and the deep learning based scores are averaged in the ensemble module to obtain the final score.
Experimental Settings
Datasets: SemEval-2017 provides 7 tracks over monolingual and cross-lingual language pairs. We first translate all sentences into English via Google Translate and then build a universal model on English pairs only. The training set consists of all the monolingual English data from the SemEval STS tasks (2012-2015), i.e., 13,592 sentence pairs.

For each track, we use the training datasets provided by SemEval-2017 as development sets. Almost all test data come from SNLI, except for Track 4b, which comes from WMT; this explains why performance on Track 4b SP-EN-WMT is much lower. We therefore perform 10-fold cross validation (CV) on Track 4b SP-EN-WMT.

Preprocessing: All sentences are translated into English via Google Translate. Stanford CoreNLP is used for tokenization, lemmatization, POS tagging and dependency parsing.
Evaluation: For Track 1 to Track 6, Pearson correlation coefficient is used to evaluate each individual test set. For Primary Track, since it is achieved by submitting results of all the secondary sub-tracks, a macro-averaged weighted sum of all correlations on sub-tracks is used for evaluation.
Results on Training Data
A series of comparison experiments on the English STS 2016 training set was performed to explore different features and algorithms.

Comparison of NLP Features

Table 4 lists the results of different NLP features with the GB learning algorithm. We find that: (1) the simple BOW features with kernel functions are effective for sentence semantic similarity; (2) the combination of all NLP features achieves the best results, which indicates that all features contribute. We therefore do not perform feature selection and use all NLP features in the following experiments.

Comparison of Learning Algorithms

Table 5 lists the results of different algorithms using all NLP features as well as the deep learning scores. We find:
(2) Regarding deep learning models, DL-word and DL-proj outperform the other 2 non-linear models on all the 5 datasets. This result is consistent with the findings in (Wieting et al., 2015):"In outof-domain scenarios, simple architectures such as word averaging vastly outperform LSTMs." (3) All ensemble methods significantly improved the performance. The ensemble of 3 machine learning algorithms (RF+GB+XGB) outperforms any single learning algorithm. Similarly, the ensemble of the 4 deep learning models (DL-all) promotes the performance to 75.28%, which is sig- nificantly better than single model and is comparable to the result using expert knowledge. Furthermore, the ensemble of 3 machine learning algorithms and 4 deep learning models by averaging these 7 scores (EN-seven), achieves the best results on all of the development set in English STS 2016. It suggests that the traditional NLP methods and the deep learning models are complementary to each other and their combination achieves the best performance.
Results on Cross-lingual Data
To address the cross-lingual tracks, we first translate cross-lingual pairs into monolingual pairs and then apply the universal model to estimate semantic similarity; language translation is therefore critical to performance. The straightforward approach (Strategy 1) is to translate the foreign-language sentence into English. We observe that Strategy 1 tends to produce synonyms. For example, the English-Spanish pair

The respite was short. / La tregua fue breve.

is translated into the English-English pair

The respite was short. / The respite was brief.

where short and brief are synonyms produced by the MT system rather than a difference actually expressed in the original languages. Since an MT system may favour certain words and can also translate English into the foreign language, we propose Strategy 2: we first translate the English sentence into the foreign target language and then translate it back to English via MT. Under Strategy 2, the above English-Spanish pair is translated into the same English sentence:

The respite was brief.

Table 6 compares the results of the two strategies on cross-lingual data. Strategy 2 clearly achieves better performance, which indicates that the semantic differences introduced by MT-produced synonyms in cross-lingual pairs behave differently from genuine differences in monolingual pairs.
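A sketch of the two strategies, written against a generic translate(text, source, target) callable that stands in for whatever MT service is used (no specific client API is assumed here):

```python
def strategy_1(english_sent, foreign_sent, foreign_lang, translate):
    # translate only the foreign sentence into English
    return english_sent, translate(foreign_sent, foreign_lang, "en")

def strategy_2(english_sent, foreign_sent, foreign_lang, translate):
    # round-trip the English sentence through the foreign language so that
    # both sides pass through the same MT system's lexical preferences
    round_trip = translate(translate(english_sent, "en", foreign_lang), foreign_lang, "en")
    return round_trip, translate(foreign_sent, foreign_lang, "en")
```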
Results on Spanish-English WMT
On the Spanish-English WMT dataset, system performance drops dramatically, possibly because the training and test data come from different domains. We therefore use 10-fold cross validation on this dataset for evaluation. Table 7 lists the results on Spanish-English WMT, where the last column (wmt(CV)) shows that using the in-domain dataset achieves better performance. Taking a closer look at this dataset, we find that several of the original Spanish sentences are meaningless. For example, the English-Spanish pair "His rheumy eyes began to cloud." / "A sus ojos rheumy comenzóa nube." has a score of 1, as the second sentence is not proper Spanish. Since there are many meaningless Spanish sentences in this dataset sourced from MT evaluation, we speculate that they were created as negative training samples for an MT model. Thus, on this dataset only, we take Spanish as the target language and translate the English sentences into Spanish. We then use the 9 MT evaluation metrics (mentioned in Section 2.1) to generate MT based features, and average these 9 metrics as the similarity score (MT(es) 3).

Table 7: Pearson correlations on Spanish-English WMT. MT(es) 3 is calculated on the translated Spanish-Spanish form of the pairs. We did not perform cross validation with the deep learning models and did not include them in the ensemble due to time constraints.
System Configuration
Based on the above results, we configure the following three systems:
Run 1: all features with the RF algorithm (RF).
Run 2: all features with the GB algorithm (GB).
Run 3: ensemble of the three learning algorithms and the four deep learning scores (EN-seven).
In particular, we train on Track 4b SP-EN-WMT using the wmt dataset provided in SemEval-2017, and Run 2 and Run 3 on this track are combined with the MT(es) 3 score.
Results on Test Data
The last three rows list the results of two top systems and one baseline system provided by organizer. The baseline is to use the cosine similarity of one-hot vector representations of sentence pairs. On all language pairs, our ensemble system achieves the best performance. This indicates that both the traditional NLP methods and the deep learning methods make contribution to performance improvement.
Conclusion
To address mono-lingual and cross-lingual sentence semantic similarity evaluation, we build a universal model in combination of traditional NLP methods and deep learning methods together and the extensive experimental results show that this combination not only improves the performance but also increases the robustness for modeling similarity of multilingual sentences. Our future work will concentrate on learning reliable sentence pair representations in deep learning.
Table 2: List of 11 kernel functions
Type | Measures
linear kernel | Cosine distance, Manhattan distance, Euclidean distance, Chebyshev distance
stat kernel | Pearson coefficient, Spearman coefficient, Kendall tau coefficient
non-linear kernel | polynomial, rbf, laplacian, sigmoid
…single sentence features, which is of the same order of magnitude as sentence pair matching features.
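As an illustration of the feature types listed in Table 2, the sketch below computes the distance, correlation and non-linear kernel values for a pair of sentence vectors. It is only a sketch of the feature family, not the system's actual implementation; the function name and toy vectors are ours.

```python
import numpy as np
from scipy.spatial.distance import cityblock, euclidean, chebyshev, cosine
from scipy.stats import pearsonr, spearmanr, kendalltau
from sklearn.metrics.pairwise import (polynomial_kernel, rbf_kernel,
                                      laplacian_kernel, sigmoid_kernel)

def pair_kernel_features(u, v):
    """Compute the 11 kernel/measure values of Table 2 for two sentence vectors."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    U, V = u.reshape(1, -1), v.reshape(1, -1)
    return {
        # linear-kernel style distance measures
        "cosine":     1.0 - cosine(u, v),     # cosine() returns a distance
        "manhattan":  cityblock(u, v),
        "euclidean":  euclidean(u, v),
        "chebyshev":  chebyshev(u, v),
        # statistical correlation measures
        "pearson":    pearsonr(u, v)[0],
        "spearman":   spearmanr(u, v)[0],
        "kendall":    kendalltau(u, v)[0],
        # non-linear kernels
        "polynomial": polynomial_kernel(U, V)[0, 0],
        "rbf":        rbf_kernel(U, V)[0, 0],
        "laplacian":  laplacian_kernel(U, V)[0, 0],
        "sigmoid":    sigmoid_kernel(U, V)[0, 0],
    }

print(pair_kernel_features([0.1, 0.4, 0.2], [0.2, 0.3, 0.1]))
```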
Table 3 lists the statistics of the development and the test data for each track in SemEval-2017.
Track | Language Pair | Dev. Pairs | Dev. Dataset | Test Pairs
Track 1 | Arabic-Arabic (AR-AR) | 1088 | MSRpar, MSRvid, SMTeuroparl (2017) | 250
Track 2 | Arabic-English (AR-EN) | 2176 | MSRpar, MSRvid, SMTeuroparl (2017) | 250
Track 3 | Spanish-Spanish (SP-SP) | 1555 | News, Wiki (2014, 2015) | 250
Track 4a | Spanish-English (SP-EN) | 595 | News, Multi-source (2016) | 250
Track 4b | Spanish-English WMT news data (SP-EN-WMT) | 1000 | WMT (2017) | 250
Track 5 | English-English (EN-EN) | 1186 | Plagiarism, Postediting, Ans.-Ans., Quest.-Quest., HDL (2016) | 250
Track 6 | English-Turkish (EN-TR) | - | - | 500
Table 3: The statistics of the development and test sets.
Table 4: Feature comparison on English STS 2016; the last three are the top three systems in STS 2016.
 | Algorithm | Postediting | Ques.-Ques. | HDL | Plagiarism | Ans.-Ans. | Weighted mean
Single Model | RF | 0.8394 | 0.6858 | 0.7966 | 0.8259 | 0.5882 | 0.7518
Single Model | GB | 0.8357 | 0.6967 | 0.7964 | 0.8293 | 0.6306 | 0.7618
Single Model | XGB | 0.7917 | 0.6237 | 0.7879 | 0.8175 | 0.6190 | 0.7333
Single Model | DL-word | 0.8097 | 0.6635 | 0.7839 | 0.8003 | 0.5614 | 0.7283
Single Model | DL-proj | 0.7983 | 0.6584 | 0.7910 | 0.7892 | 0.5573 | 0.7234
Single Model | DL-dan | 0.7695 | 0.4200 | 0.7411 | 0.6876 | 0.4756 | 0.6274
Single Model | DL-lstm | 0.7864 | 0.5895 | 0.7584 | 0.7783 | 0.5182 | 0.6921
Ensemble | RF+GB+XGB | 0.8298 | 0.6969 | 0.8086 | 0.8313 | 0.6234 | 0.7622
Ensemble | DL-all | 0.8308 | 0.6817 | 0.8160 | 0.8261 | 0.5854 | 0.7528
Ensemble | EN-seven | 0.8513 | 0.7077 | 0.8288 | 0.8515 | 0.6647 | 0.7851
Table 5: Algorithm comparison on the English STS 2016 datasets.
Table 6: Pearson correlations on Cross-lingual STS 2016; the last row is the top system in 2016.
Table 8: The results of our three runs on the STS 2017 test datasets.
1 https://cloud.google.com/translate/
2 http://asiya.cs.upc.edu/demo/asiya_online.php
3 http://scikit-learn.org/stable/
4 https://github.com/dmlc/xgboost
5 http://www.cis.upenn.edu/~ccb/ppdb/
Acknowledgments
This research is supported by grants from the Science and Technology Commission of Shanghai Municipality (14DZ2260800 and 15ZR1410700), the Shanghai Collaborative Innovation Center of Trustworthy Software for Internet of Things (ZF1213) and NSFC (61402175).
117,656 | Oki Electric Industry : Description of the Oki System as Used for MUC-7 | This paper describes the Oki Information Extraction system as used for the MUC-7 evaluation [1][2]. The tasks we have conducted are Named Entity, Co-reference, Template Element and Template Relation. Each module is implemented using MT system modules and pattern recognition modules. Our purposes in participating in the MUC-7 evaluation are to evaluate how effective MT system modules are for other applications such as IE systems, and to develop our information extraction technology based on pattern recognition. Oki's MT system, PENSÉE [3][4], is a commercial system and is one of the major MT systems in Japan. The translation mechanism of the PENSÉE system basically applies the transfer method. There are English-to-Japanese (EJ) and Japanese-to-English (JE) systems, and both are based on the same software, that is, both MT systems use the same Grammar Description Language (GDL) and the GDL rule translator and interpreter (the GDL system). Oki has already spent more than ten years on the development and improvement of the MT system. The Oki IE system used for MUC-7 is composed of a surface pattern recognition module and a structural pattern recognition module. The surface pattern recognition module traces a text at the surface linguistic level and detects NE elements and co-referred elements without any language analysis such as lexical analysis or syntax analysis. The structural pattern recognition module traces the parse trees of a text, which are generated by the parser of the MT system. Syntactic and semantic information embedded in the parse tree is used to detect NE elements, co-referred elements and so on. The structural pattern recognition rules are described in GDL and are executed on the GDL system, which is based on a pattern-matching mechanism over tree data. Detected elements of the tree data are marked and are extracted after execution of the rules.
J Fukumoto ffukumoto@kansai.oki.co.jp
Kansai Lab
R&D group Oki Electric Industry Co., Ltd. Crystal Tower
1-2-27 Shiromi, Chuo-ku, Osaka 540-6025, JAPAN
F Masui masui@kansai.oki.co.jp
Kansai Lab
R&D group Oki Electric Industry Co., Ltd. Crystal Tower
1-2-27 Shiromi, Chuo-ku, Osaka 540-6025, JAPAN
M Shimohata simohata@kansai.oki.co.jp
Kansai Lab
R&D group Oki Electric Industry Co., Ltd. Crystal Tower
1-2-27 Shiromi, Chuo-ku, Osaka 540-6025, JAPAN
M Sasaki sasakig@kansai.oki.co.jp
Kansai Lab
R&D group Oki Electric Industry Co., Ltd. Crystal Tower
1-2-27 Shiromi, Chuo-ku, Osaka 540-6025, JAPAN
Oki Electric Industry : Description of the Oki System as Used for MUC-7
INTRODUCTION
This paper describes the Oki Information Extraction system as used for the MUC-7 evaluation [1] [2]. The tasks we have conducted are Named Entity, Co-reference, Template Element and Template Relation. Each module is implemented using MT system modules and pattern recognition modules. Our purposes in participating in the MUC-7 evaluation are to evaluate how effective MT system modules are for other applications such as IE systems, and to develop our information extraction technology based on pattern recognition.
Oki's MT system, PENSÉE [3] [4], is a commercial system and is one of the major MT systems in Japan. The translation mechanism of the PENSÉE system basically applies the transfer method. There are English-to-Japanese (EJ) and Japanese-to-English (JE) systems, and both are based on the same software, that is, both MT systems use the same Grammar Description Language (GDL) and the GDL rule translator and interpreter (the GDL system). Oki has already spent more than ten years on the development and improvement of the MT system. The Oki IE system used for MUC-7 is composed of a surface pattern recognition module and a structural pattern recognition module. The surface pattern recognition module traces a text at the surface linguistic level and detects NE elements and co-referred elements without any language analysis such as lexical analysis or syntax analysis. The structural pattern recognition module traces the parse trees of a text, which are generated by the parser of the MT system. Syntactic and semantic information embedded in the parse tree is used to detect NE elements, co-referred elements and so on. The structural pattern recognition rules are described in GDL and are executed on the GDL system, which is based on a pattern-matching mechanism over tree data. Detected elements of the tree data are marked and are extracted after execution of the rules.
BACKGROUND
Oki has submitted two systems: the English system for MUC-7 (NE, CO, TE and TR) and the Japanese system for MET-2 (NE). This is our first participation in the MUC and MET evaluations. In order to develop the systems in a short period, we utilized the parsing module of the MT system as a sentence analyzer. The English system was developed using the English sentence analyzer of PENSÉE-EJ and the Japanese system was developed using the Japanese sentence analyzer of PENSÉE-JE. We also utilized the GDL system for the description of extraction rules (only for the description of structural patterns). The GDL system has a powerful rule debugging environment, which was very helpful for developing an extraction system in a short period.
OKI MT SYSTEM : PENSÉE
In the MT system, PENSÉE, translation rules are executed on the GDL system. A GDL rule consists of a pattern matching part and an action part. The pattern matching part describes conditions that specify a part of the tree data, and the action part states changes to the specified tree structure and/or modification of node information in the tree. A sample transformation of tree data is shown in Figure 1. The pattern matching specifies a sequence of article (art), adjective (adj) and noun, and both sides of the sequence are arbitrary numbers of nodes ("*1" and "*2"). After the transformation, the node information of the article is embedded into the noun node, and the noun and adjective nodes are moved under the noun phrase node (NP). In our current implementation, a lower node modifies its upper node.
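The GDL syntax itself is not shown in this description, so the following Python sketch only illustrates the kind of tree rewrite that the Figure 1 rule performs; the Node class and rule format are our assumptions, not the actual GDL notation.

```python
# Fold the article's information into the noun and group adj+noun under an NP,
# mirroring the art-adj-noun rule sketched in Figure 1.

class Node:
    def __init__(self, cat, word=None, children=None, attrs=None):
        self.cat, self.word = cat, word
        self.children = children or []
        self.attrs = attrs or {}

def apply_art_adj_noun_rule(parent):
    kids = parent.children
    for i in range(len(kids) - 2):
        art, adj, noun = kids[i], kids[i + 1], kids[i + 2]
        if (art.cat, adj.cat, noun.cat) == ("art", "adj", "noun"):
            noun.attrs["article"] = art.word           # article info -> noun node
            np = Node("NP", children=[adj, noun])      # adj and noun move under NP
            parent.children = kids[:i] + [np] + kids[i + 3:]
            return True                                # rule fired once
    return False

s = Node("S", children=[Node("art", "the"), Node("adj", "big"), Node("noun", "bank")])
apply_art_adj_noun_rule(s)
print([c.cat for c in s.children])   # ['NP']
```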
OVERVIEW OF THE OKI IE SYSTEM
Oki's IE system consists of surface pattern recognition modules, structural pattern recognition modules, filtering programs that convert the internal representation of the parser into a surface representation, and SGML tag processing modules. The architecture of the system is shown in Figure 2. The system firstly recognizes surface-level patterns in a text and adds SGML tags for each task. In this analysis, NE elements, CO elements and sub-categories of NE elements for the TE task are extracted. In the SGML tag processing module, these tags and the original SGML tags are embedded in their adjacent words for morphological and syntax analysis. Each sentence of the tag-processed text is parsed by the morphological and syntax analyzer originally used in the MT system. Structural pattern analysis rules for the NE task are implemented in the syntax analysis rules. The recognized patterns are expressed as node attributes in the parse tree. In order to obtain the NE results, the node attribute information is extracted and embedded in the text as SGML tags by the NE filter.
For the CO task, all parse trees of the sentences are converted into one tree structure, each child node of which is the parse tree of a sentence. In CO recognition, anaphoric elements are extracted and their antecedents are detected by traversing the tree structure. Information on CO elements is extracted from the tree structure and embedded in the original text as SGML tags by the CO filter. Co-reference numbers are also set by the CO filter. For the TE and TR tasks, the text tree structure is used for identifying TE and TR elements. The TE and TR filters generate TE and TR information in BNF format from the tree structure of a text.
The NE system
The NE system consists of a surface pattern recognition module, a structural pattern recognition module, and some filtering programs for tag-processing for parsing and for post-processing.
Surface Pattern Recognition
Surface pattern recognition is processed in the following sub-processes.
Recognition of capitalized areas: Sequential capitalized words are recognized as NE candidates.
Head word processing: The head word of a sentence is removed from the NE candidates when it is registered in the non-element word list.
Merging capitalized areas: Some functional words and the prepositions "for" and "of" are utilized for recognition of NE candidates. For example, the functional word "Bank" and the preposition "of" are used to recognize the NE element "Bank of Tokyo". The functional word "University" and the preposition "of" are used to recognize the NE element "University of Tokyo".
Type recognition of NE elements: Type information of NE candidates is recognized from functional words, such as "Mr." for a person name, "Bank" for an organization name, "City" for a location name and so on.
Dictionary look-up: The rest of the NE candidates are checked against word lists for each NE type. The lists were manually extracted from newspaper articles, the index of a world map, company names from stock market news, etc.
In the recognition of NE elements, identified elements are utilized for recognizing their abbreviations when these are repeated in a text. For example, when "Mr. John Doe" in a text is recognized as a person name, the words "John" and "Doe" in the same text are also recognized as a person name. Moreover, the abbreviation "FAA" in a text is used for checking NE candidates after recognition of "Federal Aviation Administration" in the same text.
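The surface steps above can be pictured as in the small sketch below: capitalized spans are collected, merged over "of"/"for", and typed by cue words (dictionary look-up and abbreviation propagation are omitted). This is only an illustration under our own assumptions; the word lists and function names are placeholders, not the system's resources.

```python
NON_ELEMENT_HEAD_WORDS = {"The", "It", "However"}                     # illustrative
ORG_CUES, PERSON_CUES, LOC_CUES = {"Bank", "University"}, {"Mr.", "Ms."}, {"City"}

def surface_ne_candidates(tokens):
    """Collect capitalized spans, merge over 'of'/'for', and type them by cue words."""
    spans, i = [], 0
    while i < len(tokens):
        if tokens[i][:1].isupper():                                   # capitalized area
            j = i + 1
            while j < len(tokens) and (tokens[j][:1].isupper()
                                       or tokens[j] in ("of", "for")):
                j += 1
            while tokens[j - 1] in ("of", "for"):                     # trim dangling preposition
                j -= 1
            span = tokens[i:j]
            # head word processing: drop a lone sentence-initial function word
            if not (i == 0 and len(span) == 1 and span[0] in NON_ELEMENT_HEAD_WORDS):
                spans.append(span)
            i = j
        else:
            i += 1
    typed = []
    for span in spans:                                                # type by functional words
        if any(w in ORG_CUES for w in span):
            label = "ORGANIZATION"
        elif any(w in PERSON_CUES for w in span):
            label = "PERSON"
        elif any(w in LOC_CUES for w in span):
            label = "LOCATION"
        else:
            label = "UNKNOWN"                                         # would go to dictionary look-up
        typed.append((span, label))
    return typed

print(surface_ne_candidates("He joined the Bank of Tokyo yesterday".split()))
```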
Tag-processing for Parsing
An original text is an SGML-tagged one, and the NE elements in the text are also SGML-tagged after the surface pattern recognition. In order to parse such an SGML-tagged text, these tags have to be concealed. In the parse tree, tag information is expressed in the node attributes, so the system can handle information obtained from the surface-level pattern recognition during parsing.
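A very simplified picture of "concealing" tags before parsing is sketched below: tags are swapped for placeholder tokens and kept in a map, standing in for the real mechanism of embedding tag information into adjacent words and carrying it as parse-tree node attributes. The regex and function names are our own, not part of the described system.

```python
import re

def conceal_tags(tagged_sentence):
    """Replace SGML tags by placeholders so a plain-text parser can run,
    keeping a map from each placeholder back to the original tag."""
    tag_map, counter = {}, 0
    def repl(match):
        nonlocal counter
        key = f"__TAG{counter}__"
        tag_map[key] = match.group(0)
        counter += 1
        return key
    concealed = re.sub(r"</?[A-Za-z][^>]*>", repl, tagged_sentence)
    return concealed, tag_map

def restore_tags(concealed, tag_map):
    for key, tag in tag_map.items():
        concealed = concealed.replace(key, tag)
    return concealed

c, m = conceal_tags('<ENAMEX TYPE="ORGANIZATION">FAA</ENAMEX> officials met today.')
print(c)                  # __TAG0__FAA__TAG1__ officials met today.
print(restore_tags(c, m))
```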
Structural Pattern Recognition
After parsing, several structural pattern rules are applied to a parse tree in order to recognize NE elements. Some of them are as follows:
- The subject element of certain types of verbs such as "say", "die", "play" and so on is recognized as a person name.
- The noun phrase in front of the relative pronoun "who" is recognized as a person name.
- The noun phrase whose appositive phrase is a person name is recognized as a person name, and vice versa.
- The noun phrase followed by "employee", "spokesman" and so on is recognized as an organization name.
- The noun phrase with the preposition "in", "at", "near" or "over", whose appositive phrase is an organization name, is recognized as an organization name.
Post-processing: After the structural pattern recognition, NE tag information is extracted from the parse tree and added to the pattern-tagged text. All the NE tag information is utilized for tagging the non-body part of the text.
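To make the structural rules above concrete, the toy sketch below applies two of them to a flat stand-in for a parse (token, POS, head index, dependency label). This is only an analogy under our own assumptions: the real system matches GDL patterns over parse trees, and the verb and trigger lists here are illustrative.

```python
SAY_LIKE_VERBS = {"say", "said", "die", "played"}
ORG_TRIGGERS = {"spokesman", "employee"}

def structural_ne_rules(parse):
    """parse: list of (token, pos, head_index, deprel) tuples."""
    labels = {}
    for i, (tok, pos, head, rel) in enumerate(parse):
        # Rule: the subject of a say-like verb is a person name.
        if rel == "nsubj" and pos == "PROPN" and parse[head][0].lower() in SAY_LIKE_VERBS:
            labels[i] = "PERSON"
        # Rule: a proper noun directly followed by "who" is a person name.
        if pos == "PROPN" and i + 1 < len(parse) and parse[i + 1][0].lower() == "who":
            labels[i] = "PERSON"
        # Rule: a proper noun modifying an organization trigger word.
        if pos == "PROPN" and i + 1 < len(parse) and parse[i + 1][0].lower() in ORG_TRIGGERS:
            labels[i] = "ORGANIZATION"
    return labels

parse = [("Smith", "PROPN", 1, "nsubj"), ("said", "VERB", 1, "root"),
         ("a", "DET", 4, "det"), ("Boeing", "PROPN", 4, "compound"),
         ("spokesman", "NOUN", 1, "nsubj"), ("agreed", "VERB", 1, "conj")]
print(structural_ne_rules(parse))   # {0: 'PERSON', 3: 'ORGANIZATION'}
```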
The CO System
The CO system also consists of a surface pattern recognition module, a structural pattern recognition module, and some filtering programs for tag-processing for parsing and for post-processing.
Surface Level Recognition
In the surface pattern recognition of the NE system, abbreviations and repeated elements are utilized for the recognition of NE elements. This mechanism is also used for the surface-level recognition of CO elements. For example, when the person name "Mr. John Doe" appears in a text, the words "John" and "Doe" are treated as abbreviations of the person name.
Tag-processing for Parsing: SGML tags and CO tags obtained from the surface pattern analysis are concealed for parsing, as in the NE system.
Structural Level Recognition
This is the main module of the co-reference analysis. Firstly, the CO system recognizes co-reference expressions of appositions and expressions of the form "A is B". Then the CO system extracts anaphoric expressions, i.e. pronouns and noun phrases with definite articles, and traverses the tree structure in a bottom-to-top, left-to-right way to detect their antecedents. Post-processing: Information on CO tags is extracted from the tree structure of the text and added to the pattern-tagged text. All the CO tag information is utilized for tagging the non-body part of the text, in the same way as in the NE system. Moreover, all co-referred elements initially share the same co-reference number, so renumbering according to the CO task definition is also done in this module.
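The antecedent search can be approximated, very roughly, as walking back from an anaphor over the preceding mentions, nearest first, and taking the first type-compatible one. The sketch below is a flat approximation of the tree traversal described above; the mention format and pronoun table are our own assumptions.

```python
PRONOUN_TYPES = {"he": "PERSON", "she": "PERSON", "it": "ORGANIZATION"}

def find_antecedent(mentions, anaphor_index):
    """mentions: dicts like {'text': ..., 'type': ...} in textual order."""
    anaphor = mentions[anaphor_index]
    wanted = PRONOUN_TYPES.get(anaphor["text"].lower(), anaphor.get("type"))
    for j in range(anaphor_index - 1, -1, -1):        # nearest preceding mention first
        if mentions[j].get("type") == wanted:
            return j
    return None

mentions = [{"text": "Federal Aviation Administration", "type": "ORGANIZATION"},
            {"text": "Mr. John Doe", "type": "PERSON"},
            {"text": "it", "type": None}]
print(find_antecedent(mentions, 2))   # 0 -> the pronoun is linked to the FAA
```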
The TE System
The TE system consists of the following sub-processes.
Recognition of Entities
Entities which are recognized by the NE system are selected. Capitalized elements which are not identified by the NE system are handled as candidates for TE elements. These elements are identified as TE elements if they are related to some descriptor.
Recognition of Descriptors
Noun phrases and prepositional phrases which contain functional words of descriptors, such as "Comdr.", "President", "agency" and so on, are extracted. Semantic information of descriptors, which is used in parsing, is utilized for recognizing the type information of TE entities in the next process.
Recognition Relation between Entities and Descriptor
Relations between entities and descriptors are recognized by structural pattern rules for the TE task. For example, if an entity and a descriptor are related by some verb such as "called", "named" or "be", they are recognized as a TE pattern.
Merging TE Patterns
Some TE patterns are merged using co-reference information; that is, the same entities (co-referred elements) are recognized as one entity, as illustrated in the sketch below.
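The following sketch shows one way such merging could be done: mentions placed in the same co-reference chain collapse into a single template entity whose descriptors are pooled. The data layout is assumed for illustration only.

```python
def merge_te_patterns(patterns, coref_chains):
    """patterns: dicts {'entity': mention, 'descriptor': ...};
    coref_chains: list of sets of mentions referring to the same entity."""
    def canonical(mention):
        for chain in coref_chains:
            if mention in chain:
                return max(chain, key=len)        # longest mention as the entity name
        return mention
    merged = {}
    for p in patterns:
        key = canonical(p["entity"])
        entry = merged.setdefault(key, {"entity": key, "descriptors": set()})
        if p.get("descriptor"):
            entry["descriptors"].add(p["descriptor"])
    return list(merged.values())

patterns = [{"entity": "Eastern Airlines", "descriptor": "carrier"},
            {"entity": "Eastern", "descriptor": "the airline"}]
chains = [{"Eastern Airlines", "Eastern"}]
print(merge_te_patterns(patterns, chains))
```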
The TR System
The TR system identifies relations between TE elements. In our current simple implementation, if a person and an organization are related by some predicate, they are recognized as having an "employee-of" relation. Moreover, if an organization and a location are related by some predicate, they are recognized as having a "locate-of" relation. In the case of an artifact and an organization, they will have a "product-of" relation.
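This type-pair heuristic can be pictured as a small lookup over the arguments of each predicate, as in the sketch below; the input format and relation names written with underscores are our own illustrative choices.

```python
RELATION_BY_TYPES = {
    ("PERSON", "ORGANIZATION"): "employee_of",
    ("ORGANIZATION", "LOCATION"): "locate_of",
    ("ARTIFACT", "ORGANIZATION"): "product_of",
}

def extract_relations(predications):
    """predications: list of (predicate, [(entity, type), ...]) pairs."""
    relations = []
    for _pred, args in predications:
        for e1, t1 in args:
            for e2, t2 in args:
                rel = RELATION_BY_TYPES.get((t1, t2))
                if rel and e1 != e2:
                    relations.append((e1, rel, e2))
    return relations

preds = [("works_for", [("John Doe", "PERSON"), ("Oki Electric", "ORGANIZATION")]),
         ("based_in", [("Oki Electric", "ORGANIZATION"), ("Osaka", "LOCATION")])]
print(extract_relations(preds))
# [('John Doe', 'employee_of', 'Oki Electric'), ('Oki Electric', 'locate_of', 'Osaka')]
```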
WORKTHROUGH ARTICLE
Table 1 shows the results for the workthrough article. As for the TE task, we mainly implemented rules for entities; therefore, the TE score for entities is at an average level according to the results of MUC-6. However, the score for location is very low. This is because the TE system has only a few location rules and has no dictionaries for identifying the detailed information of a location.
CONCLUSION AND FUTURE DIRECTION
This participation in the MUC-7 and MET-2 tasks was our first trial, and we had to develop our IE systems for the tasks in a short period. Our purpose in participating in the MUC-7 tasks was to evaluate how effective MT parsing modules are for developing an IE system in a short period and how well they work. We found that it is useful to utilize the MT system modules for other applications such as an IE system, because our MT system has achieved practical robustness, and the rule description system, the GDL system, is also very helpful due to its pattern matching mechanism. However, the parsing modules of the MT system were originally developed for language transformation in a transfer method; therefore, the tree structure of the original language is sometimes converted into the structure of the target language. These kinds of rules caused some difficulty for pattern matching in the IE tasks.
Figure 1: Transformation of tree data in the GDL system
Figure 2: Architecture of the Oki MUC system
[Figure 2 depicts the system architecture: the input text passes through the SGML tag processors and the surface pattern recognition modules (NE, CO and TE), then through the morphological and syntax analyzer with the structural pattern recognition modules; the co-reference, template element and template relation analysis modules and the NE, CO, TE and TR filters produce the NE, CO, TE and TR results.]
Table 1: Results for the Workthrough Article
Task | Recall | Precision | F-measure
NE (nyt9602140704) | 94/132 (71.1) | 94/114 (82.5) | 76.4
TE (nyt9602140509) | 52/186 (30.0) | 52/62 (83.9) | 41.9
TR (nyt9602140509) | 45/168 (26.8) | 45/62 (72.6) | 39.1
CO (nyt9609100378) | 26/79 (32.9) | 26/47 (55.3) | 41.3
We have also participated in the MET-2 task in Japanese, which has only the NE task. We are planning to apply the IE technology developed for the MUC-7 tasks to a practical Japanese information extraction system. Moreover, it would be useful to participate in Japanese CO, TE, TR and ST tasks if they are defined in the next conference.
[1] TIPSTER TEXT PROGRAM Phase II, DARPA, (1996).
[2] Proceedings of the 6th Message Understanding Conference (MUC-6), DARPA, (1995).
[3] Masui, F., Tsunashima, T., Sugio, T., Tazoe, T. and Shiino, T.: "Analysis of Lengthy Sentences Using an English Comparative Structure Model", Systems and Computers in Japan, pp. 40-48, SCRIPTA TECHNICA Inc., (1996).
[4] PENSÉE, http://www.oki.co.jp/OKI/RDG/English/kikaku/vol.1/sugio/main.html, http://www.oki.co.jp/OKI/Home/English/Topic/PENSEE/
18,256,594 | Selecting Translation Strategies in MT using Automatic Named Entity Recognition | We report on the results of an experiment aimed at enabling a machine translation system to select the appropriate strategy for dealing with words and phrases which have different translations depending on whether they are used as proper names or common nouns in the source text. We used the ANNIE named entity recognition system to identify named entities in the source text and pass them to MT systems in the form of "do-not-translate" lists. A consistent gain of about 20% in translation accuracy was achieved for all tested systems. The results suggest that successful translation strategy selection is dependent on accurate segmentation and disambiguation of the source text - aspects which could be significantly improved by named entity recognition. We further suggest an automatic method for distinguishing morpho-syntactic and lexical differences in MT output that could have applications in automated MT evaluation for morphologically rich languages. | [
52826864
] | Selecting Translation Strategies in MT using Automatic Named Entity Recognition
Bogdan Babych
Centre for Translation Studies
University of Leeds
UK
Department of Computer Science
University of Sheffield
UK
Anthony Hartley a.hartley@leeds.ac.uk
Centre for Translation Studies
University of Leeds
UK
Selecting Translation Strategies in MT using Automatic Named Entity Recognition
Introduction
Language communities develop certain acceptable practices and norms for translating different types of concepts, expressions and texts from other languages and cultures. These practices are described as translation methods, translation strategies and translation procedures (Vinay and Darbelnet, 1958, 1995). Translation methods relate to whole texts, while strategies and (finer-grained) procedures relate to sentences and smaller units (Newmark, 1988:81). The choice of a translation strategy often depends on the type of the translated unit. For example, for certain types of proper names the optimal translation strategy is transference, i.e., a "do-not-translate" or "transliterate" strategy, while the majority of common nouns are translated with other strategies: literal translation, transposition, modulation, etc. (Newmark, 1988: 81-88). This implies that recognising different types of units in the source text is a necessary condition for optimising the choice of translation strategy and, ultimately, for improving the quality of the target text.
The problem of selecting translation strategies for words that may be used as proper names or common nouns in the source language is related to the more general problem of word sense disambiguation (WSD), one of the most serious problems for Machine Translation technology. Dealing with "proper vs common disambiguation" (PCD) often requires combining different knowledge sources, in a similar way to WSD (Stevenson and Wilks, 2001). But the cross-level nature of this problem also suggests that improvement in MT quality could be achieved through improving related aspects of the source-text analysis, such as Named Entity (NE) recognition (Somers, 2003:524). For the purposes of this discussion, we assimilate proper nouns to NEs and investigate NE recognition as a possible solution to the PCD problem insofar as it might enable the selection of the correct strategy.
Accurate NE recognition is important for the general quality of MT for the following reasons: 1. The translation of the same token may be different depending on whether the token is a common noun or part of an NE; e.g. in Russian, if a common noun is part of an organization name, a "do-not-translate" or "transliterate" strategy should be used instead of the default translation strategy:
(3) Original: Moody's Investors Service Inc. placed the long-term debt under review. MT output: Инвесторы Муди Обслуживают компанию, поместил долгосрочный долг под обзором.
('Investors of Moody serve the company, he placed the long-term debt under review'). Here the NE Investors Service Inc. is not treated as a single segment, which causes a combined morpho-syntactic and PCD error: the system translates the word service as a verb that means 'to serve' instead of using the correct "do-not-translate" strategy.
Thus NE recognition could be beneficial both for morpho-syntactic well-formedness and for correct PCD in MT output. In (Babych and Hartley, 2003) we addressed the first of these two problems. In this paper, we concentrate on the second problem and show how PCD can be improved using existing NE recognition modules.
Certain types of NEs, such as organisation names, appear to be a weak point even for some leading-edge MT systems, such as Systran and Reverso. At the same time, the problem of accurate NE recognition has been specifically addressed and benchmarked by the developers of information extraction (IE) systems. For example, the NE recognition module of the ANNIE IE system achieves a combined Precision & Recall score of 80-90% on news texts (Cunningham et al., 2002). Our suggestion is that combining this highly accurate NE recognition module with state-of-the-art MT systems would be beneficial for MT output, even if we do not change any of the other MT components.
The source code for commercial MT systems is not publicly available, so for our experiment we used one of the pre-processing tools of these systems -"do-not-translate" (DNT) lists. These lists were created from NE annotation produced by the ANNIE NE recognition module. For each of the three available MT systems we generated two different translations: a baseline translation and the DNT-processed translation. We made an approximate distinction between PCD and morpho-syntactic differences automatically using statistical frequency weights similar to tf.idf scores. We evaluated the improvement in PCD by manually annotating the PCD differences in the baseline and NE-processed MT output.
The remainder of the paper is organised as follows: in section 2 we discuss the rationale of our automated method for distinguishing lexical and morpho-syntactic differences in MT output. In section 3 we describe the linguistic resources and scoring procedure used in the experiment. In section 4 we present the PCD improvement achieved for three MT systems. Section 5 points out possible applications of the work to automatic MT evaluation. In section 6 we discuss conclusions and future work.
Distinguishing lexical and morphosyntactic differences in MT output
DNT-processing causes both morpho-syntactic and lexical differences in the compared translations. In example (4) we annotate lexical (L) and morpho-syntactic (M) differences in the reference and DNT-processed translations. These differences are due to the fact that the company name "Eastern (Airlines)" received the correct morpho-syntactic category as a result of DNT-processing (Noun, not Adjective). Moreover, not translating this company name is the correct option for the Russian target text. In this example, all six variants in the DNT-processed translation are better than their counterparts in the baseline translation. Note that a correct PCD choice for lexical differences is determined by the senses of the words in the source text, and there is no way of correctly using lexical items from the baseline translation as alternative translations. In contrast, the source text does not require particular values of morpho-syntactic categories in the target text. These values are determined by the rules of the target language and by the morpho-syntactic structure of the sentence chosen by a translator. In many cases these values can be subject to greater variation than the lexical choices. For example, there is a legitimate way of using the last two words in Table 1 in the genitive and accusative case, as in the baseline translation shown in example (5), if these values are required by their morpho-syntactic position:
 | Original | Baseline | DNT-processed
L | Eastern | Восточный ('Eastern (ADJ)') | Eastern (not translated)
L | current | потока ('stream (NOUN)') | текущих ('current (ADJ)')
L | contract | заключают ('conclude (VERB)') | контракта ('contract (NOUN)')
M | moved | перемещенный (PARTICIPLE) | переместил (VERB)
M | cost | стоимости (GEN) | стоимостью (INST)
M | agreements | соглашения (ACC) | соглашений (GEN)
(5) Предлагая дату встречи, Eastern переместился на один шаг ближе к тому, чтобы повторно открыть текущие контрактные соглашения (ACC) высокой стоимости (GEN) .
('By proposing a meeting date, Eastern moved one step closer toward that [situation], to reopen current agreements (ACC) of high cost (GEN) ) A rough distinction between morpho-syntactic and lexical differences in the compared output texts can be drawn automatically using term frequency weights proposed in (Babych, Hartley, Atwell, 2003) for evaluating MT for Information Extraction purposes. These weights (S-scores) are similar to tf.idf scores: they describe the relative salience of terms in a particular text. They were found to make an accurate distinction between content and function words. With a varying degree of accuracy (depending on how analytic the grammar of a given language is) this distinction also separates lexical and morphosyntactic differences in compared texts. For Russian (which has a not highly analytic grammar) it achieves 88.4% Precision for lexical items, while for French the Precision is 98%.
The S-scores are computed for each word in each text using the following formula:
S(i,j) = \log\left(\frac{P_{doc(i,j)} - P_{corp\text{-}doc(i)}}{P_{corp(i)}} \times \frac{N - df_i}{N}\right)
where: P_doc(i,j)
is the relative frequency of the word in the text ("relative frequency" is the number of tokens of this word-type divided by the total number of tokens); P_corp-doc(i) is the relative frequency of the same word in the rest of the corpus, without this text; P_corp(i) is the relative frequency of the word in the whole corpus, including this particular text; df_i is the number of documents in the corpus where the word w_i occurs (the document frequency); and N is the total number of documents in the corpus.
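The sketch below is a direct transcription of the S-score formula above into Python; the variable and function names are ours, and the toy corpus is purely illustrative.

```python
import math
from collections import Counter

def s_scores(documents, doc_index):
    """Compute S-scores for the words of documents[doc_index] (a list of token lists).
    Only words with P_doc > P_corp-doc and absolute corpus frequency > 1 are scored."""
    N = len(documents)
    doc = documents[doc_index]
    rest = [t for i, d in enumerate(documents) if i != doc_index for t in d]
    corpus = [t for d in documents for t in d]
    tf_doc, tf_rest, tf_corp = Counter(doc), Counter(rest), Counter(corpus)
    df = Counter()
    for d in documents:
        for w in set(d):
            df[w] += 1
    scores = {}
    for w in set(doc):
        p_doc = tf_doc[w] / len(doc)
        p_rest = tf_rest[w] / len(rest) if rest else 0.0
        p_corp = tf_corp[w] / len(corpus)
        # df[w] < N guards against log(0) for words occurring in every document
        if tf_corp[w] > 1 and (p_doc - p_rest) > 0 and df[w] < N:
            scores[w] = math.log(((p_doc - p_rest) / p_corp) * ((N - df[w]) / N))
    return scores

docs = [["the", "airline", "strike", "strike", "ended"],
        ["the", "parliament", "voted", "on", "the", "budget"],
        ["the", "airline", "cut", "fares"]]
print(s_scores(docs, 0))
```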
We computed S-scores for words with:
(P doc(i,j) -P corp-doc(i) ) > 0; AbsFrq i > 1,
where AbsFrq i is the number of occurrences of the word w i in the corpus. Table 2 illustrates the ranking of words according to their S-score for one of the English texts from MUC6 NE corpus, for which tf i,j > 1 (tf i,j is the number of occurrences of the word w i in the document d j ). 1,880 concern 42 -of Table 2. Ranking of words by the S-score
We established by experiment that a reasonable threshold for distinguishing content words and functional words is:
S-score = 1 This threshold gives good results for text in all analysed languages: English, French and Russian. Our assumption implies that for comparing lexical differences in two variants of translation we need to compare for each text sets of words with an S-score above the threshold.
Accordingly, all words that were different in each set were automatically highlighted in their respective texts and presented for manual scoring.
In the examples of MT in the following sections, words with tf i,j > 1 are bold, words with tf i,j = 1 are bold and italic. In the original English sentences, the NEs used for the DNT lists are highlighted in bold.
Resources and scoring method
For our experiment we used the following linguistic resources: 30 texts (news articles) which were processed with the NE recognition module of the GATE-1 IE system in the DARPA MUC6 competition. The results of manual NE annotation were also available, but GATE NE recognition is sufficiently accurate for these texts (Recall -84%, Precision -94 %, Precision and Recall -89.06% (Gaizauskas et al, 1995)) that errors in the GATE output will not have had a major impact on our results. Table 3 summarises the statistical parameters of the corpus analysed. The corpus is rich in NEs, so the effect of NE recognition on PCD could be accurately measured for the MT systems. Premium' v3.0b, released in 2000 Two translations were generated by each MT system: − a baseline translation without a DNT list − a DNT-processed translation with the automatically created DNT list of organisation names The baseline and the DNT-processed translation were automatically compared using the method presented in Section 2. Lexical differences were highlighted and scored according to the following criterion: +1 -PCD is correct in the DNT-processed translation and is wrong in the baseline translation 0 -PCD in both translations is equally (not) correct -1 -PCD is wrong in the DNT-processed translation, or DNT-processing is not acceptable translation strategy for the NE; PCD is correct in the baseline translation Further examples illustrate these scores: All differences highlighted in the whole MUC-6 NE corpus were manually annotated for each of the MT systems under consideration. Cases of morpho-syntactic differences were also annotated and excluded from the scored set of differences. The number of annotated differences is presented in Table 4: Table 4 The larger number of differences and the lower Precision for the Russian system can be attributed to the largely synthetic morphology of Russian.
The overall score for improvement / decline in PCD for each MT system was calculated as a sum of all scores of lexical differences divided by the number of lexical differences for the particular system.
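This per-system aggregation is a simple average of the +1/0/-1 judgements, as sketched below; the figures in the example are the ProMT 1998 E-R counts reported in Table 5, and the function name is ours.

```python
def pcd_gain(judgements):
    """judgements: list of ints in {+1, 0, -1}, one per annotated lexical difference."""
    return sum(judgements) / len(judgements)

# ProMT 1998 E-R: 154 improvements, 239 ties, 74 regressions
judgements = [1] * 154 + [0] * 239 + [-1] * 74
print(f"{pcd_gain(judgements):+.1%}")   # +17.1%
```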
Results of the experiment for PCD
The set-up of this experiment gives a reasonable estimate of the influence of NE recognition on MT quality, and suggests that if improvement in MT can be achieved via pre-processing tools, then we can expect even greater improvement when an NE recognition module is properly integrated into MT systems (e.g., types of NEs requiring non-transference translation strategies are also distinguished). The improvement achieved for the MT systems under consideration was around 20%.
The results of manual annotation are summarised in Table 5; the gains in PCD were +17.1%, +20.2% and +23.6% for ProMT 1998 (E-R), ProMT 2001 (E-F) and Systran 2000 (E-F) respectively.
All systems showed consistent improvement in PCD tasks after NE recognition. The results indicate that systematic NE recognition has great potential for improving the quality of MT, and that successful PCD depends on appropriate analysis of other aspects in the source text, such as determining correct values for morphological categories and correct syntactic segmentation. These aspects could be substantially improved via NE recognition.
However, finding appropriate segmentation and morpho-syntactic disambiguation is a necessary but not a sufficient condition for achieving improvement in MT: most cases of decline in MT quality after DNT-processing are due to the lack of flexibility in determining the optimal translation strategy for NEs. In our experiment, the overall improvement in the quality of PCD is due to the fact that the transference ("do-not-translate") strategy is optimal, or it is an acceptable translation strategy for the majority of NE that occurred in our corpus (Newmark, 1982). But many NEs might need to be translated by specific translation equivalents that are normally recognised by the state-of-the-art MT systems. This is especially important for names of well-known organisations, such as 'The Treasury', 'The Army', 'The Navy' 'Labour', which are often part of more complex NEs: 'The Treasury Secretary', 'The Labour Government', 'The Army Chief' -in all these cases a "do-not-translate" strategy could cause a serious decline in MT quality.
Our analysis suggests that targeting specific needs of MT could be a way of improving MT quality with IE technology: the NE recognition stage could meet the needs of MT systems by distinguishing different classes of NEs which require different translation strategies. Appropriate annotation of these NEs in the source text could then guide the MT system at the transfer stage.
Conclusions and future work
We have characterised the potential improvement in PCD for MT systems achievable with accurate NE recognition. The results indicate that PCD is very sensitive to those aspects of MT quality which can be improved with NE recognition: finding appropriate morpho-syntactic categories and correct segmentation for NEs often influences the correctness of the general analysis of the source sentence. But some aspects of PCD cannot be improved with existing NE recognition and need to be addressed by the IE and MT communities jointly. NE recognition modules can be extended to distinguish between types of NEs that require different translation strategies; and MT systems can be adapted to deal more flexibly with user input, by using NE annotation designed specifically for MT purposes.
The proposed method of making a rough automatic distinction between lexical and morpho-syntactic differences allowed us to annotate important features in a relatively large corpus within a reasonable amount of time. We suggest that this method could have applications in other domains of NLP, in particular -in automated MT evaluation and in automatic alignment of parallel texts.
Application to automatic MT evaluation
Current automatic evaluation methods, such as BLEU (Papineni et al., 2001), do not make a distinction between lexical and morpho-syntactic differences, but distinguishing them and controlling the quality of MT on several separate levels might be useful to for the evaluation of MT systems under development (especially for target languages with a rich morphology, where these two types of differences clearly characterise different aspects of quality).
Another important problem for further research is establishing whether different degrees of legitimate variation in translation are allowed for items with different tf.idf and S-scores. One of the most serious problems for the BLEU method is related to legitimate variability in the reference translation. In order not to penalise acceptable MT that is different from human translation, the metric uses several reference translations of the same text. These resources can be expensive to create. However, if terms with different significance scores show different levels of legitimate variation, then the metric could rely on potentially more stable terms, so fewer reference texts would be needed to produce consistent evaluation scores for MT systems.
Yet another problem for the BLEU metric is high data scarcity of N-grams in languages with complex synthetic morphology, such as Slavonic languages. In order to achieve evaluation scores comparable with scores for English or other analytical languages, we need to use much larger reference corpora of human translations. An alternative solution to this problem could be to make automatically a rough distinction between lexical and morphological differences and to concentrate on the lexical differences that are expected to be less sparse across human translations and MT output.
Application to automatic alignment of parallel texts
An analysis of the S-scores (Section 2) of lexical differences in the compared translations also gives interesting results. It can be noted that words which are translations of the same word in the DNT-processed and the baseline target texts have very close scores. Ranked lists of differences for the Russian MT output are presented in Table 6.
The match between S-scores is closer for words with a unique translation, which implies that they have similar distribution in the text and in the corpus.
Another interesting property of the statistical significance measure is that different word forms which are translations of the same word (e.g., an English NE) often have very close S-scores, which are also close to the score of the original word. For example, S-scores for the first word in the NE "Pan Am" and for three morphological variants of its wrong translation into Russian are presented in Table 7. All are variants of the lexeme "кастрюля" -'saucepan', and also have different frequencies in the text. This effect is also the strongest for words which have a unique translation in the corpus. This property of the S-score may be useful in MT evaluation for highly inflected languages.
Future work in this direction will involve measuring the accuracy of the suggested method of distinguishing morpho-syntactic and lexical differences in MT output for typologically different languages and evaluating the degree of legitimate variation in translation at different levels of the significance scores.
(4) Original: By proposing a meeting date, Eastern moved one step closer toward reopening current high-cost contract agreements.
Baseline translation: Предлагая дату встречи, Восточный-(L) перемещенный-(M) один шаг ближе к повторному открытию высокой стоимости-(M) потока-(L) заключают-(L) соглашения-(M) ('By proposing a meeting date, Eastern (Adj.) moved (Participle) one step closer toward reopening the high-cost (ACC) of a current (Noun: "the stream [of water, etc.]") (they) conclude (Verb) agreements (ACC)')
DNT-processed translation: Предлагая дату встречи, Eastern-(L) переместил-(M) один шаг ближе к повторному открытию текущих-(L) соглашений-(M) контракта-(L) с высокой стоимостью-(M) ('By proposing a meeting date, Eastern (Noun) moved (Verb) one step closer toward reopening of current (Adj.) agreements (GEN) of a contract (Noun) with high cost (INST)')
(1) Original: …the Los Angeles office of the Hay Group, a management consulting firm.
MT output: …Лос-Анджелесский офис Группы Сена, управление консультантская фирма. ('…the Los Angeles office of the group of the hay [i.e., the grass, cut and dried for fodder], management consulting firm')
Human translation: Лос-Анджелесский офис Hay Group, управленческой консультантской фирмы.
In this case NE recognition is directly linked to the PCD problem: we need to disambiguate between "common" and "NE" readings of the same string.
2. Failure to recognise NEs as single syntactic units or to determine their correct morpho-syntactic category in the source text may cause segmentation errors, which lead to the wrong morpho-syntactic structure in the target text, e.g.:
(2) Original: a Big Board spokesman couldn't comment on the talks.
MT output: Большой представитель Правления не мог комментировать переговоры. ('A big spokesman of the Board [management] couldn't comment on the talks').
Table 1: Examples of translation differences
Table 3: Statistical parameters of the corpus
DNT lists were automatically generated from GATE annotations and the texts were translated with three commercial MT systems:
- English-Russian 'ProMT 98' v4.0, released in 1998
- English-French 'ProMT' (Reverso) v5.01, released in 2001
- English-French 'Systran Professional Premium' v3.0b, released in 2000
Table 5: Scoring results
Mark | ProMT 1998 E-R (N / Score) | ProMT 2001 E-F (N / Score) | Systran 2000 E-F (N / Score)
+1 | 154 / +154 | 62 / +62 | 77 / +77
0 | 239 / 0 | 66 / 0 | 61 / 0
-1 | 74 / -74 | 30 / -30 | 36 / -36
∑ | 467 / +80 | 158 / +32 | 174 / +41
Gain | +17.1% | +20.2% | +23.6%
+ 41
Table 6: Scores for corresponding words (rank : word : S-score)
DNT-processed translation | Baseline translation
1:NBC:3.939817 | 1:ЭН-БИ-СИ:3.906120
1:Техники:3.416626 (technicians, NOM.PLUR) | 1:Техников:3.382496 ((of) technicians, GEN.PLUR)
1:Electric:3.416626 | 1:Электрическая:3.382496 (electric, NOM.SING.FEM)
1:Broadcast:3.416626 | 1:Радиопередачи:3.382496 (of broadcast, GEN.SING)
2:Служащие:2.959119 (employees, NOM.PLUR) | 2:Служащих:2.924432 (of employees, GEN.PLUR)
2:General:2.959119 | 2:Общая:2.924432 (general, NOM.PLUR.FEM)
3:Association:1.886203 | 3:Ассоциации:2.303370 (of association, GEN.SING)
Table 7: Scoring results
Word | S-score | Abs. frq. in text / in the rest of the corpus
DNT-NE: Pan | 3.087052 | 14 / 0
Baseline transl. of NE: Кастрюля (NOM) | 3.112597 | 8 / 0
Baseline transl. of NE: Кастрюлю (ACC) | 3.112597 | 2 / 0
Baseline transl. of NE: Кастрюли (GEN) | 3.112597 | 2 / 0
Babych, B., Hartley, A. and Atwell, E. 2003. Statistical Modelling of MT output corpora for Information Extraction. In Proceedings of the Corpus Linguistics 2003 conference, edited by Dawn Archer, Paul Rayson, Andrew Wilson and Tony McEnery. Lancaster University (UK), 28-31 March 2003, pp. 62-70.
Babych, B. and Hartley, A. 2003. Improving Machine Translation Quality with Automatic Named Entity Recognition. In Proceedings of the 7th International EAMT Workshop on MT and Other Language Technology Tools. Budapest, Hungary, pp. 1-8.
Cunningham, H., Maynard, D., Bontcheva, K. and Tablan, V. 2002. GATE: A Framework and Graphical Development Environment for Robust NLP Tools and Applications. In Proceedings of the 40th Anniversary Meeting of the Association for Computational Linguistics (ACL'02). Philadelphia, July 2002.
Gaizauskas, R., Wakao, T., Humphreys, K., Cunningham, H. and Wilks, Y. 1995. University of Sheffield: Description of the LaSIE system as used for MUC-6. In Proceedings of the 6th Message Understanding Conference (MUC-6). Morgan Kaufmann, pp. 207-220.
Newmark, P. 1982. Approaches to translation. Pergamon Press, Oxford, NY.
Newmark, P. 1988. A textbook of translation. Longman, London, NY.
Papineni, K., Roukos, S., Ward, T. and Zhu, W.-J. 2001. BLEU: a method for automatic evaluation of machine translation. IBM research report RC22176 (W0109-022), September 17, 2001.
Somers, H. 2003. Machine Translation: latest developments. In The Oxford Handbook of Computational Linguistics, edited by Ruslan Mitkov. Oxford University Press, Oxford, NY, pp. 512-528.
Stevenson, M. and Wilks, Y. 2001. The integration of knowledge sources in word sense disambiguation. Computational Linguistics 27(3):321-349.
Vinay, J.P. and Darbelnet, J. 1958. Stylistique comparée du français et de l'anglais: Méthode de traduction. Didier, Paris.
Vinay, J.P. and Darbelnet, J. 1995. Comparative stylistics of French and English: a methodology for translation. Translated and edited by Juan C. Sager and M.-J. Hamel. J. Benjamins Pub., Amsterdam, Philadelphia.
7,562,019 | HRItk: The Human-Robot Interaction ToolKit - Rapid Development of Speech-Centric Interactive Systems in ROS | Developing interactive robots is an extremely challenging task which requires a broad range of expertise across diverse disciplines, including, robotic planning, spoken language understanding, belief tracking and action management. While there has been a boom in recent years in the development of reusable components for robotic systems within common architectures, such as the Robot Operating System (ROS), little emphasis has been placed on developing components for Human-Robot-Interaction. In this paper we introduce HRItk (the Human-Robot-Interaction toolkit), a framework, consisting of messaging protocols, core-components, and development tools for rapidly building speech-centric interactive systems within the ROS environment. The proposed toolkit was specifically designed for extensibility, ease of use, and rapid development, allowing developers to quickly incorporate speech interaction into existing projects. | [] |
2012. June 7
Ian Lane
Carnegie Mellon University
NASA Ames Research Park
Moffett Field, California, USA
Vinay Prasad
Carnegie Mellon University
NASA Ames Research Park
Moffett Field, California, USA
Gaurav Sinha
Carnegie Mellon University
NASA Ames Research Park
Moffett Field, California, USA
Arlette Umuhoza
Carnegie Mellon University
NASA Ames Research Park
Moffett Field, California, USA
Shangyu Luo
Carnegie Mellon University
NASA Ames Research Park
Moffett Field, California, USA
Akshay Chandrashekaran
Carnegie Mellon University
NASA Ames Research Park
Moffett Field, California, USA
Antoine Raux araux@honda-ri.com
Honda Research Institute
Mountain View, California, USA
HRItk: The Human-Robot Interaction ToolKit - Rapid Development of Speech-Centric Interactive Systems in ROS
Workshop on Future directions and needs in the Spoken Dialog Community: Tools and Data
Montréal, Canada, 2012. June 7
Developing interactive robots is an extremely challenging task which requires a broad range of expertise across diverse disciplines, including, robotic planning, spoken language understanding, belief tracking and action management. While there has been a boom in recent years in the development of reusable components for robotic systems within common architectures, such as the Robot Operating System (ROS), little emphasis has been placed on developing components for Human-Robot-Interaction. In this paper we introduce HRItk (the Human-Robot-Interaction toolkit), a framework, consisting of messaging protocols, core-components, and development tools for rapidly building speech-centric interactive systems within the ROS environment. The proposed toolkit was specifically designed for extensibility, ease of use, and rapid development, allowing developers to quickly incorporate speech interaction into existing projects.
Introduction
Robots that operate alongside and with humans in settings such as a home or office are on the verge of becoming a natural part of our daily environment (Bohren et al., 2011, Rosenthal and Veloso 2010, Kanda et al., 2009, Srinivasa et al., 2009). To work cooperatively in these environments, however, they need the ability to interact with people, both known and unknown to them. Natural interaction through speech and gestures is a prime candidate for such interaction; however, the combination of communicative and physical actions, as well as the uncertainty inherent in audio and visual sensing, makes such systems extremely challenging to create.
Developing speech and gesture-based interactive robots requires a broad range of expertise, including robotic planning, computer vision, acoustic processing, speech recognition, natural language understanding, belief tracking, as well as dialog management and action selection, among others. This complexity makes it difficult for all but very large research groups to develop complete systems. While there has been a boom in recent years in the development and sharing of reusable components, such as path planning, SLAM and object recognition, within common architectures, such as the Robot Operating System (ROS) (Quigley, 2009), little emphasis has been placed on the development of components for Human-Robot Interaction, despite the growing need for research in this area.
Prior work in Human-Robot Interaction has generally resulted in solutions for specific robotic platforms (Clodic et al., 2008) or standalone frameworks (Fong et al., 2006) that cannot be easily combined with standard architectures used by robotics researchers. Earlier work (Kanda et al., 2009, Fong et al., 2006) has demonstrated the possibilities of multimodal and multiparty interaction on robotic platforms; however, the tasks and interactions explored until now have been extremely limited, due to the complexity of infrastructure required to support such interactions and the expertise required to effectively implement and optimize individual components. To make significant progress, we believe that a common, easy to use, and easily extensible infrastructure, similar to that supported by ROS, is required for multi-modal human-robot interaction. Such a framework will allow researchers to rapidly develop initial speech and gesture-based interactive systems, enabling them to rapidly deploy systems, observe and collect interactions in the field and iteratively improve system components based on observed deficiencies. By using a common architecture and messaging framework, components and component models can easily be upgraded and extended by a community of researchers, while not affecting other components.
Towards this goal we have developed HRItk 1 (Human-Robot-Interaction toolkit), an infrastructure and set of components for developing speech-centric interactive systems within the ROS environment. The proposed toolkit provides the core components required for speech interaction, including, speech recognition, natural language understanding and belief tracking. Additionally it provides basic components for gesture recognition and gaze tracking.
Framework Overview
An overview of the core components in the toolkit is given in Figure 1. We introduce two classes of components required for speech and multimodal interaction into the ROS framework: understanding nodes and tracking services. Understanding nodes are perceptual components that recognize and understand interaction events. Using input from sensors, intermediate processing nodes or other understanding components, these nodes generate hypotheses about current user input. Tracking services monitor the long-term and continuous aspects of interaction, including user dialog goals. These services are leveraged by components including Dialog Management and Action Selection to perform interaction. Additionally, these services provide context to understanding nodes, enabling them to apply context-specific processing during the understanding phase.
Data Processing Nodes
The understanding components implemented in this work heavily leverage existing components developed in ROS (Quigley et al., 2009). These include openni_kinect, which processes depth-images from the Microsoft Kinect sensor, openni_tracker, which performs skeletal tracking, and uvccam, which processes color images from external USB cameras. In the near future we also plan to support far-field speech recognition using the HARK_ROS toolkit (Nakadai et al., 2010).
Understanding Nodes
Understanding nodes recognize and understand events observed during interaction. As input they use either data obtained directly from sensors, preprocessed data from intermediate processing nodes or output from other understanding components. They either perform processing on explicit interaction events, such as speech or gesture input, or process continuous input such as joint position or gaze direction. The current understanding nodes implemented within HRItk are listed in Table 1 along with the ROS topics on which they publish.
Understanding nodes publish two forms of messages: state messages {READY, START and STOP}, indicating the state of the node and whether an interaction event has been detected, and hypothesis messages, which enumerate the most likely observed events along with a likelihood measure for each. The specific structure of a hypothesis is dependent on the event being observed.
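As a rough illustration of how a client node might consume these messages, the following sketch subscribes to two of the speech topics from Table 1. It is a minimal sketch only: the use of std_msgs/String as the message type is an assumption made here for illustration; the actual HRItk message definitions are not reproduced in this paper.

```python
#!/usr/bin/env python
# Minimal sketch of a ROS node consuming HRItk-style understanding output.
# Topic names follow Table 1; std_msgs/String is a stand-in message type.
import rospy
from std_msgs.msg import String

def on_state(msg):
    # state messages indicate READY / START / STOP for an interaction event
    rospy.loginfo("speech state: %s", msg.data)

def on_hypothesis(msg):
    # hypothesis messages enumerate likely events with likelihood scores
    rospy.loginfo("speech hypothesis: %s", msg.data)

if __name__ == "__main__":
    rospy.init_node("speech_listener")
    rospy.Subscriber("speech/state", String, on_state)
    rospy.Subscriber("speech/hypothesis/best", String, on_hypothesis)
    rospy.spin()
```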
State Tracking Services
In addition to understanding specific events such as utterances or gestures, an interactive system needs to track longer-term and/or continuous aspects of interaction. Such aspects include user goals, which can span multiple events, as well as user attention (using, e.g., gaze and posture information). These can be defined as characterizing the state of the world (i.e. the user, the interaction, or the environment) at a given time, with possible reference to history.
In addition, states can be significantly larger objects than individual event understanding results, which could unnecessarily consume significant bandwidth if constantly broadcast. Therefore, state tracking modules use ROS services rather than topics to communicate their output to other modules. Any module can send a message to the tracking service containing a specific query and will receive in response the matching state or belief over states.
In order to allow components to react to changes in the state, each state-tracking module publishes an UPDATED message to its state topic whenever a new state is computed.
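The query pattern described above can be sketched as follows. The service name belief and the topic dialogstate/state are taken from Table 1, but the service type QueryBelief and the package hritk_msgs are hypothetical placeholders introduced only for illustration; the real service definitions shipped with HRItk may differ.

```python
# Sketch: react to UPDATED notifications on the tracker's state topic,
# then pull the current belief via a ROS service.
import rospy
from std_msgs.msg import String
from hritk_msgs.srv import QueryBelief  # hypothetical service type

def on_updated(msg):
    if msg.data == "UPDATED":
        rospy.wait_for_service("belief")
        query = rospy.ServiceProxy("belief", QueryBelief)
        # request the belief over a specific concept set (illustrative fields)
        response = query(concepts=["Room", "Floor", "Object"])
        rospy.loginfo("new belief: %s", response)

rospy.init_node("belief_consumer")
rospy.Subscriber("dialogstate/state", String, on_updated)
rospy.spin()
```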
Component Implementations
Speech Detection and Recognition is performed using a ROS node developed around the Julius Speech Recognition Engine (Lee and Kawahara, 2009). We selected this engine for its compatibility with HARK (Nakadai et al, 2010), and its support of common model formats. A wrapper for Julius was implemented in C++ to support the ROS messaging architecture listed in Table 1. Partial hypotheses are output during decoding, and final hypotheses are provided in 1-best, N-best and Confusion Network formats. Context is supported via language model switching.
In order to develop a Speech Recognition component for a new task, at minimum two component models are required: a pronunciation dictionary and a language model (or recognition grammar). Within HRItk we provide the tools required to generate these models from a set of labeled example utterances. We describe the rapid model building procedure in Section 4. Natural Language Understanding is implemented using Conditional Random Fields (Lafferty et al., 2001), similar to the approach described in (Cohn, 2007). For example, given "Take this tray to the kitchen" listed in Table 3, three concept/value pairs are extracted: Action{Carry}, Object{tray}, Room{kitchen}. Similar to the speech recognition component, the NLU component can be rapidly retrained using a set of tagged example sentences.
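As an illustration of how tagged example sentences can be turned into training data for such a CRF tagger, the following sketch converts @Concept{value} annotations into per-token BIO labels. It is a simplified stand-in for the actual HRItk build scripts, which are not shown here.

```python
import re

# Convert one tagged example sentence (Examples.txt style) into
# (token, BIO-label) pairs suitable for training a CRF tagger.
TAG = re.compile(r"@(\w+)\{([^}]*)\}")

def to_bio(tagged_sentence):
    pairs = []
    pos = 0
    for m in TAG.finditer(tagged_sentence):
        # untagged words before this concept get the "O" label
        for tok in tagged_sentence[pos:m.start()].split():
            pairs.append((tok, "O"))
        concept, value = m.group(1), m.group(2)
        for i, tok in enumerate(value.split()):
            pairs.append((tok, ("B-" if i == 0 else "I-") + concept))
        pos = m.end()
    for tok in tagged_sentence[pos:].split():
        pairs.append((tok, "O"))
    return pairs

print(to_bio("take this @Object{tray} to the @Room{kitchen}"))
# [('take', 'O'), ('this', 'O'), ('tray', 'B-Object'), ('to', 'O'),
#  ('the', 'O'), ('kitchen', 'B-Room')]
```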
Gesture Recognition of simple hand positions is implemented using a Kinect depth sensor and previous work by Fujimura and Xu (2007) for palm/finger segmentation. Currently, the module publishes a hypothesis for the number of fingers raised by the user, though more complex gestures can be implemented based on this model. Gaze Tracking is implemented using ASEF filters (Bolme et al., 2009) and geometric projection. Separate ASEF filters were trained to locate the pupils of the left and right eye as well as their inner and outer corners. Filters were trained on hand-labeled images we collected in-house.
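The following numpy sketch illustrates how a trained correlation filter, such as an ASEF filter, can be applied to an eye-region patch to locate a landmark; training the filter itself and the subsequent geometric projection of gaze direction are not shown, and the function names are illustrative only.

```python
import numpy as np

def locate_landmark(image_patch, asef_filter):
    """Apply a trained correlation filter (e.g. an ASEF filter) to a grayscale
    eye-region patch and return the (row, col) of the correlation peak.
    Both inputs are 2-D float arrays of the same shape."""
    # correlation in the Fourier domain: IFFT( FFT(image) * conj(FFT(filter)) )
    response = np.real(np.fft.ifft2(np.fft.fft2(image_patch) *
                                    np.conj(np.fft.fft2(asef_filter))))
    return np.unravel_index(np.argmax(response), response.shape)
```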
Dialog State Tracking is in charge of monitoring aspects of dialog that span multiple turns such as user goal. Our implementation is based on the Hound dialog belief tracking library developed at Honda Research Institute USA. Currently, our belief tracking model is Dynamic Probabilistic Ontology Trees (Raux and Ma 2011), which capture the hidden user goal in the form of a tree-shaped Bayesian Network. Each node in the Goal Network represents a concept that can appear in language and gesture understanding results. The structure of the network indicates (assumed) conditional independence between concepts. With each new input, the network is extended with evidence nodes according to the final understanding hypotheses and the system belief is estimated as the posterior probability of user goal nodes given the evidence so far.
A request to the dialog state tracking service takes the form of a set of concept names, to which the service responds with an m-best list of concept value assignments along with the joint posterior probability.
Rapid System Build Environment
The models required for the core interaction components in the system can be built from a single set of labeled examples (Examples.txt) and a concept structure file (Structure.txt) used by the Dialog State Tracker, as shown in Figure 2. Running the automatic build procedure on these two files will generate three new models: the language model and pronunciation dictionary used by the Speech Detection and Recognition node, and the statistical CRF parser applied in the Natural Language Understanding component. Given a set of labeled examples, the three models listed above are trained automatically without any intervention required from the user. Once a system has been deployed, speech input is logged, and can be transcribed and labeled with semantic concepts to improve the effectiveness of these component models.
As explained in section 3.5, our dialog state tracker organizes concepts in a tree structure. For a given domain, we specify that structure in a simple text file where each line contains a concept followed by the name of the parent concept or the keyword ROOT for the root of the tree. Based on this file and on the SLU data file, the resource building process generates the files required by the Hound belief tracker at runtime.
By default, the build process assumes at each node a uniform conditional distribution of child values given the parent value. These distributions are stored in a human-readable text file and can thus be manually updated to more informative values.
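A minimal sketch of this part of the build process is given below: it reads a Structure.txt file into a child-to-parent map and assigns uniform conditional distributions. The value sets passed in would come from the labeled examples; the function names are illustrative and do not correspond to the actual HRItk scripts.

```python
def read_structure(path):
    """Read a Structure.txt file (one '<Node> <Parent>' pair per line,
    with ROOT marking the tree root) into a child -> parent map."""
    parents = {}
    with open(path) as f:
        for line in f:
            node, parent = line.split()
            parents[node] = parent
    return parents

def uniform_cpds(parents, values):
    """Assign each node a uniform conditional distribution of its values
    given each parent value, mirroring the default described above.
    `values` maps a concept name to its list of possible values."""
    cpds = {}
    for node, parent in parents.items():
        parent_values = values.get(parent, ["ROOT"])
        for pv in parent_values:
            for v in values[node]:
                cpds[(node, v, pv)] = 1.0 / len(values[node])
    return cpds
```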
Using the above tools, we have developed a sample system using the proposed framework for a robot navigation task. The entire system can be built from the single set of labeled examples shown in Figure 3, used to train the language model, together with a component to perform actions on the SLU output.
Conclusions
In this paper we introduce HRItk (the Human-Robot-Interaction toolkit), a framework, consisting of messaging protocols, components, and development tools for rapidly building speech-centric interactive systems within the ROS environment. The proposed toolkit provides all the core components required for speech interaction, including, speech recognition, natural language understanding and belief tracking and initial implementations for gesture recognition and gaze tracking. The toolkit is specifically designed for extensibility, ease of use, and rapid development, allowing developers to quickly incorporate speech interaction into existing ROS projects.
Figure 1: Overview of core understanding and tracking components within HRItk
Table 1: ROS nodes, Topics, Services and Messages implemented within HRItk

ROS Node: Speech Detection and Recognition
  Topic / Service (*): speech/state, speech/hypothesis, speech/hypothesis/best, speech/hypothesis/final, speech/context
  Description of Messages: State identifying interaction event, each with a unique eventID; partial and final hypotheses generated during speech recognition (outputs include 1-best, N-best hypotheses and confusion networks, all output contains confidence or component model scores); context indicating dialog-state, domain, task of current interaction

ROS Node: Natural Language Understanding
  Topic / Service (*): dialogact/hypothesis, dialogact/context
  Description of Messages: Hypotheses of Concept/Value-pairs generated during NLU; context indicating dialog-state, domain, task of current interaction

ROS Node: Gesture Recognition
  Topic / Service (*): hand/hypothesis, hand/context
  Description of Messages: Hypothesis set of Gesture-Actions with confidence measure; context indicating domain or task of current interaction

ROS Node: Gaze Tracking
  Topic / Service (*): gaze/hypothesis, hand/context
  Description of Messages: Estimate of gaze direction; context listing visually salient objects within the user's field of view

ROS Node: Dialog State Tracking
  Topic / Service (*): dialogstate/state, belief (*), dialogstate/context
  Description of Messages: Receives an UPDATED message when the belief changes; belief over the concept set specified in the service request; context indicating system actions potentially affecting belief

(*) belief is provided as a ROS service rather than a topic.

1 HRItk is available for download at: http://speech.sv.cmu.edu/HRItk
Examples.txt (<Tagged example sentence> <Action>):
  @Room{kitchen}  None
  on the @Floor{fifth} floor  None
  take this @Object{package} to @Room{room 123}  Carry

Structure.txt (<Node> <Parent>):
  Room  ROOT
  Floor  Room
  Object  Room
Bohren, J., Rusu, R., Jones, E., Marder-Eppstein, E., Pantofaru, C., Wise, M., Mosenlechner, L., Meeussen, W., and Holzer, S. 2011. Towards autonomous robotic butlers: Lessons learned with the PR2. Proc. ICRA 2011.
Bolme, S., Draper, B., and Beveridge, J. 2009. Average of Synthetic Exact Filters. Proc. CVPR 2009.
Clodic, A., Cao, H., Alili, S., Montreuil, V., Alami, R. and Chatila, R. 2008. Shary: A Supervision System Adapted to Human-Robot Interaction. Proc. ISER 2008.
Cohn, T. 2007. Scaling conditional random fields for natural language processing. University of Melbourne.
Fong, T., Kunz, C., Hiatt, L. and Bugajska, M. 2006. The Human-Robot Interaction Operating System. Proc. HRI 2006.
Fujimura, K. and Xu, L. 2007. Sign recognition using constrained optimization. Proc. ACCV 2007.
Kanda, T., Shiomi, M., Miyashita, Z., Ishiguro, H., and Hagita, N. 2009. An affective guide robot in a shopping mall. Proc. HRI 2009.
Lafferty, J., McCallum, A., and Pereira, F. 2001. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. Intl. Conf. on Machine Learning.
Lee, A. and Kawahara, T. 2009. Recent Development of Open-Source Speech Recognition Engine Julius. Proc. Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), 2009.
Nakadai, K., Takahashi, T., Okuno, H.G., Nakajima, H., Hasegawa, Y., and Tsujino, H. 2010. Design and Implementation of Robot Audition System "HARK".
Quigley, M., Gerkey, B., Conley, K., Faust, J., Foote, T., Leibs, J., Berger, E., Wheeler, R. and Ng, A. 2009. ROS: an open-source robot operating system. Proc. Open-source Software Workshop, ICRA 2009.
Raux, A. and Ma, Y. 2011. Efficient Probabilistic Tracking of User Goal and Dialog History for Spoken Dialog Systems. Proc. Interspeech 2011.
Rosenthal, S. and Veloso, M. 2010. Using Symbiotic Relationships with Humans to Help Robots Overcome Limitations. Workshop for Collaborative Human/AI Control for Interactive Experiences, 2010.
Srinivasa, S., Ferguson, D., Helfrich, C., Berenson, D., Collet, A., Diankov, R., Gallagher, G., Hollinger, G., Kuffner, J., Vande-Weghe, M. 2009. Herb: A Home Exploring Robotic Butler. Autonomous Robots, 2009. |
5,540,599 | Contextualizing Semantic Representations Using Syntactically Enriched Vector Models | We present a syntactically enriched vector model that supports the computation of contextualized semantic representations in a quasi compositional fashion. It employs a systematic combination of first-and second-order context vectors. We apply our model to two different tasks and show that (i) it substantially outperforms previous work on a paraphrase ranking task, and (ii) achieves promising results on a wordsense similarity task; to our knowledge, it is the first time that an unsupervised method has been applied to this task. | [
9541345,
11182883,
7747235,
8394214,
2252135,
1627782,
1588782,
14695247,
18597583,
3102322,
126584,
15698938
] | Contextualizing Semantic Representations Using Syntactically Enriched Vector Models
Association for Computational Linguistics. Copyright Association for Computational Linguistics, July 2010.
Stefan Thater
Department of Computational Linguistics
Saarland University
Hagen Fürstenau hagenf@coli.uni-saarland.de
Department of Computational Linguistics
Saarland University
Manfred Pinkal pinkal@coli.uni-saarland.de
Department of Computational Linguistics
Saarland University
Contextualizing Semantic Representations Using Syntactically Enriched Vector Models
Proceedings of the 48th Annual Meeting of the Association for Computational Linguistics
the 48th Annual Meeting of the Association for Computational Linguistics, Uppsala, Sweden. © Association for Computational Linguistics, July 2010.
We present a syntactically enriched vector model that supports the computation of contextualized semantic representations in a quasi compositional fashion. It employs a systematic combination of first-and second-order context vectors. We apply our model to two different tasks and show that (i) it substantially outperforms previous work on a paraphrase ranking task, and (ii) achieves promising results on a wordsense similarity task; to our knowledge, it is the first time that an unsupervised method has been applied to this task.
Introduction
In the logical paradigm of natural-language semantics originating from Montague (1973), semantic structure, composition and entailment have been modelled to an impressive degree of detail and formal consistency. These approaches, however, lack coverage and robustness, and their impact on realistic natural-language applications is limited: The logical framework suffers from overspecificity, and is inappropriate to model the pervasive vagueness, ambivalence, and uncertainty of natural-language semantics. Also, the handcrafting of resources covering the huge amounts of content which are required for deep semantic processing is highly inefficient and expensive.
Co-occurrence-based semantic vector models offer an attractive alternative. In the standard approach, word meaning is represented by feature vectors, with large sets of context words as dimensions, and their co-occurrence frequencies as values. Semantic similarity information can be acquired using unsupervised methods at virtually no cost, and the information gained is soft and gradual. Many NLP tasks have been modelled successfully using vector-based models. Examples include information retrieval (Manning et al., 2008), word-sense discrimination (Schütze, 1998) and disambiguation (McCarthy and Carroll, 2003), to name but a few.
Standard vector-space models have serious limitations, however: While semantic information is typically encoded in phrases and sentences, distributional semantics, in sharp contrast to logic-based semantics, does not offer any natural concept of compositionality that would allow the semantics of a complex expression to be computed from the meaning of its parts. A different, but related problem is caused by word-sense ambiguity and contextual variation of usage. Frequency counts of context words for a given target word provide invariant representations averaging over all different usages of the target word. There is no obvious way to distinguish the different senses of e.g. acquire in different contexts, such as acquire knowledge or acquire shares.
Several approaches for word-sense disambiguation in the framework of distributional semantics have been proposed in the literature (Schütze, 1998;McCarthy and Carroll, 2003). In contrast to these approaches, we present a method to model the mutual contextualization of words in a phrase in a compositional way, guided by syntactic structure. To some extent, our method resembles the approaches proposed by Mitchell and Lapata (2008) and Erk and Padó (2008). We go one step further, however, in that we employ syntactically enriched vector models as the basic meaning representations, assuming a vector space spanned by combinations of dependency relations and words (Lin, 1998). This allows us to model the semantic interaction between the meaning of a head word and its dependent at the micro-level of relation-specific cooccurrence frequencies. It turns out that the benefit to precision is considerable.
Using syntactically enriched vector models raises problems of different kinds: First, the use of syntax increases dimensionality and thus may cause data sparseness (Padó and Lapata, 2007). Second, the vectors of two syntactically related words, e.g., a target verb acquire and its direct object knowledge, typically have different syntactic environments, which implies that their vector representations encode complementary information and there is no direct way of combining the information encoded in the respective vectors.
To solve these problems, we build upon previous work (Thater et al., 2009) and propose to use syntactic second-order vector representations. Second-order vector representations in a bag-of-words setting were first used by Schütze (1998); in a syntactic setting, they also feature in Dligach and Palmer (2008). For the problem at hand, the use of second-order vectors alleviates the sparseness problem, and enables the definition of vector space transformations that make the distributional information attached to words in different syntactic positions compatible. Thus, it allows vectors for a predicate and its arguments to be combined in a compositional way.
We conduct two experiments to assess the suitability of our method. Our first experiment is carried out on the SemEval 2007 lexical substitution task dataset (McCarthy and Navigli, 2007). It will show that our method significantly outperforms other unsupervised methods that have been proposed in the literature to rank words with respect to their semantic similarity in a given linguistic context. In a second experiment, we apply our model to the "word sense similarity task" recently proposed by Erk and McCarthy (2009), which is a refined variant of a word-sense disambiguation task. The results show a substantial positive effect.
Plan of the paper. We will first review related work in Section 2, before presenting our model in Section 3. In Sections 4 and 5 we evaluate our model on the two different tasks. Section 6 concludes.
Related Work
Several approaches to contextualize vector representations of word meaning have been proposed. One common approach is to represent the meaning of a word a in context b simply as the sum, or centroid of a and b (Landauer and Dumais, 1997). Kintsch (2001) considers a variant of this simple model. By using vector representations of a predicate p and an argument a, Kintsch identifies words that are similar to p and a, and takes the centroid of these words' vectors to be the representation of the complex expression p(a). Mitchell and Lapata (2008), henceforth M&L, propose a general framework in which meaning representations for complex expressions are computed compositionally by combining the vector representations of the individual words of the complex expression. They focus on the assessment of different operations combining the vectors of the subexpressions. An important finding is that component-wise multiplication outperforms the more common addition method. Although their composition method is guided by syntactic structure, the actual instantiations of M&L's framework are insensitive to syntactic relations and word-order, assigning identical representation to dog bites man and man bites dog (see Erk and Padó (2008) for a discussion). Also, they use syntax-free bag-of-words-based vectors as basic representations of word meaning. Erk and Padó (2008), henceforth E&P, represent the meaning of a word w through a collection of vectors instead of a single vector: They assume selectional preferences and inverse selectional preferences to be constitutive parts of the meaning in addition to the meaning proper. The interpretation of a word p in context a is a combination of p's meaning with the (inverse) selectional preference of a. Thus, a verb meaning does not combine directly with the meaning of its object noun, as on the M&L account, but with the centroid of the vectors of the verbs to which the noun can stand in an object relation. Clearly, their approach is sensitive to syntactic structure. Their evaluation shows that their model outperforms the one proposed by M&L on a lexical substitution task (see Section 4). The basic vectors, however, are constructed in a word space similar to the one of the M&L approach.
In Thater et al. (2009), henceforth TDP, we took up the basic idea from E&P of exploiting selectional preference information for contextualization. Instead of using collections of different vectors, we incorporated syntactic information by assuming a richer internal structure of the vector representations. In a small case study, moderate improvements over E&P on a lexical substitution task could be shown. In the present paper, we formulate a general model of syntactically informed contextualization and show how to apply it to a number of representative lexical substitution tasks. Evaluation shows significant improvements over TDP.
The model
In this section, we present our method of contextualizing semantic vector representations. We first give an overview of the main ideas, which is followed by a technical description of first-order and second-order vectors (Section 3.2) and the contextualization operation (Section 3.3).
Overview
Our model employs vector representations for words and expressions containing syntax-specific first- and second-order co-occurrence information.
The basis for the construction of both kinds of vector representations are co-occurrence graphs. Figure 1 shows the co-occurrence graph of a small sample corpus of dependency trees: Words are represented as nodes in the graph, possible dependency relations between them are drawn as labeled edges, with weights corresponding to the observed frequencies. From this graph, we can directly read off the first-order vector for every word w: the vector's dimensions correspond to pairs (r, w') of a grammatical relation and a neighboring word, and are assigned the frequency count of (w, r, w').
The noun knowledge, for instance, would be represented by the following vector:
$$(5_{(OBJ^{-1},\,gain)},\ 2_{(CONJ^{-1},\,skill)},\ 3_{(OBJ^{-1},\,acquire)},\ \dots)$$
This vector talks about the possible dependency heads of knowledge and thus can be seen as the (inverse) selectional preference of knowledge (see Erk and Padó (2008)).
As soon as we want to compute a meaning representation for a phrase like acquire knowledge from the verb acquire together with its direct object knowledge, we are facing the problem that verbs have different syntactic neighbors than nouns, hence their first-order vectors are not easily comparable. To solve this problem we additionally introduce another kind of vectors capturing information about all words that can be reached with two steps in the co-occurrence graph. Such a path is characterized by two dependency relations and two words, i.e., a quadruple (r, w', r', w''), whose weight is the product of the weights of the two edges used in the path. To avoid overly sparse vectors we generalize over the "middle word" w' and build our second-order vectors on the dimensions corresponding to triples (r, r', w'') of two dependency relations and one word at the end of the two-step path. For instance, the second-order vector for acquire is
$$(15_{(OBJ,OBJ^{-1},\,gain)},\ 6_{(OBJ,CONJ^{-1},\,skill)},\ 6_{(OBJ,OBJ^{-1},\,buy\text{-}back)},\ 42_{(OBJ,OBJ^{-1},\,purchase)},\ \dots)$$
In this simple example, the values are the products of the edge weights on each of the paths. The method of computation is detailed in Section 3.2. Note that second-order vectors in particular contain paths of the form (r, r^{-1}, w''), relating a verb w to other verbs w'' which are possible substitution candidates.
With first- and second-order vectors we can now model the interaction of semantic information within complex expressions. Given a pair of words in a particular grammatical relation like acquire knowledge, we contextualize the second-order vector of acquire with the first-order vector of knowledge. We let the first-order vector with its selectional preference information act as a kind of weighting filter on the second-order vector, and thus refine the meaning representation of the verb. The actual operation we will use is pointwise multiplication, which turned out to be the best-performing one for our purpose. Interestingly, Mitchell and Lapata (2008) came to the same result in a different setting.
In our example, we obtain a new second-order vector for acquire in the context of knowledge:
$$(75_{(OBJ,OBJ^{-1},\,gain)},\ 12_{(OBJ,CONJ^{-1},\,skill)},\ 0_{(OBJ,OBJ^{-1},\,buy\text{-}back)},\ 0_{(OBJ,OBJ^{-1},\,purchase)},\ \dots)$$
Note that all dimensions that are not "licensed" by the argument knowledge are filtered out as they are multiplied with 0. Also, contextualisation of acquire with the argument share instead of knowledge would have led to a very different vector, which reflects the fact that the two argument nouns induce different readings of the inherently ambiguous acquire.
First and second-order vectors
Assuming a set W of words and a set R of dependency relation labels, we consider a Euclidean vector space V_1 spanned by the set of orthonormal basis vectors {e_{r,w} | r ∈ R, w ∈ W}, i.e., a vector space whose dimensions correspond to pairs of a relation and a word. Recall that any vector of V_1 can be represented as a finite sum of the form Σ a_i e_{r,w} with appropriate scalar factors a_i. In this vector space we define the first-order vector [w] of a word w as follows:

$$[w] = \sum_{r \in R,\, w' \in W} \omega(w, r, w') \cdot \vec{e}_{r,w'}$$

where ω is a function that assigns the dependency triple (w, r, w') a corresponding weight. In the simplest case, ω would denote the frequency in a corpus of dependency trees of w occurring together with w' in relation r. In the experiments reported below, we use pointwise mutual information (Church and Hanks, 1990) instead as it proved superior to raw frequency counts:

$$\mathrm{pmi}(w, r, w') = \log \frac{p(w, w' \mid r)}{p(w \mid r)\, p(w' \mid r)}$$
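The following sketch computes these pmi weights from raw dependency-triple counts; it is illustrative only and assumes the triples are available as (w, r, w', count) tuples.

```python
import math
from collections import defaultdict

def pmi_weights(triples):
    """Compute pmi(w, r, w') = log p(w, w' | r) / (p(w | r) p(w' | r))
    from a list of (w, r, w2, count) dependency triples."""
    joint = defaultdict(float)       # (w, r, w2) -> count
    w_given_r = defaultdict(float)   # (w, r)     -> count
    w2_given_r = defaultdict(float)  # (r, w2)    -> count
    total_r = defaultdict(float)     # r          -> count
    for w, r, w2, c in triples:
        joint[(w, r, w2)] += c
        w_given_r[(w, r)] += c
        w2_given_r[(r, w2)] += c
        total_r[r] += c
    pmi = {}
    for (w, r, w2), c in joint.items():
        p_joint = c / total_r[r]
        p_w = w_given_r[(w, r)] / total_r[r]
        p_w2 = w2_given_r[(r, w2)] / total_r[r]
        pmi[(w, r, w2)] = math.log(p_joint / (p_w * p_w2))
    return pmi
```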
We further consider a similarly defined vector space V_2, spanned by an orthonormal basis {e_{r,r',w'} | r, r' ∈ R, w' ∈ W}. Its dimensions therefore correspond to triples of two relations and a word. Evidently this is a higher dimensional space than V_1, which therefore can be embedded into V_2 by the "lifting maps" L_r : V_1 → V_2 defined by L_r(e_{r',w'}) := e_{r,r',w'} (and by linear extension therefore on all vectors of V_1). Using these lifting maps we define the second-order vector [[w]] of a word w as

$$[[w]] = \sum_{r \in R,\, w' \in W} \omega(w, r, w') \cdot L_r[w']$$

Substituting the definitions of L_r and [w'], this yields

$$[[w]] = \sum_{r, r' \in R,\, w'' \in W} \Big( \sum_{w' \in W} \omega(w, r, w')\, \omega(w', r', w'') \Big) \vec{e}_{r,r',w''}$$

which shows the generalization over w' in form of the inner sum. For example, if w is a verb, r = OBJ and r' = OBJ^{-1} (i.e., the inverse object relation), then the coefficients of e_{r,r',w''} in [[w]] would characterize the distribution of verbs w'' which share objects with w.
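A minimal sketch of these definitions over sparse dictionary-based vectors is given below. It assumes the weight function ω is available as a nested dictionary mapping each word to {(relation, neighbor): weight}, with inverse relations (e.g. "OBJ-1") included as separate relation labels; the function names are illustrative.

```python
from collections import defaultdict

def first_order(w, omega):
    """First-order vector of w: dimensions (r, w') weighted by omega(w, r, w')."""
    return dict(omega.get(w, {}))

def second_order(w, omega):
    """Second-order vector of w: dimensions (r, r', w'') obtained by lifting
    the first-order vectors of w's neighbours w' along the relation r."""
    vec = defaultdict(float)
    for (r, w1), weight1 in omega.get(w, {}).items():
        for (r2, w2), weight2 in omega.get(w1, {}).items():
            vec[(r, r2, w2)] += weight1 * weight2
    return dict(vec)
```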
Composition
Both first- and second-order vectors are defined for lexical expressions only. In order to represent the meaning of complex expressions we need to combine the vectors for grammatically related words in a given sentence. Given two words w and w' in relation r we contextualize the second-order vector of w with the r-lifted first-order vector of w':

$$[[w_{r:w'}]] = [[w]] \times L_r([w'])$$

Here × may denote any operator on V_2. The objective is to incorporate (inverse) selectional preference information from the context (r, w') in such a way as to identify the correct word sense of w. This suggests that the dimensions of [[w]] should be filtered so that only those compatible with the context remain. A more flexible approach than simple filtering, however, is to re-weight those dimensions with context information. This can be expressed by pointwise vector multiplication (in terms of the given basis of V_2). We therefore take × to be pointwise multiplication.
To contextualize (the vector of) a word w with multiple words w_1, ..., w_n and corresponding relations r_1, ..., r_n, we compute the sum of the results of the pairwise contextualizations of the target vector with the vectors of the respective dependents:

$$[[w_{r_1:w_1, \dots, r_n:w_n}]] = \sum_{k=1}^{n} [[w_{r_k:w_k}]]$$
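The contextualization operation can be sketched as follows, reusing the sparse dictionary representation from the sketch above; pointwise multiplication restricts the second-order vector to the dimensions licensed by the argument. This is an illustrative sketch, not the implementation used in the experiments.

```python
def lift(r, first_order_vec):
    """L_r: map a first-order vector with dimensions (r', w') to a
    second-order vector with dimensions (r, r', w')."""
    return {(r, r2, w2): v for (r2, w2), v in first_order_vec.items()}

def contextualize(second_order_vec, r, first_order_arg):
    """[[w_{r:w'}]] = [[w]] x L_r([w']), with x as pointwise multiplication."""
    lifted = lift(r, first_order_arg)
    return {dim: second_order_vec[dim] * lifted[dim]
            for dim in second_order_vec.keys() & lifted.keys()}

def contextualize_all(second_order_vec, deps):
    """Sum the pairwise contextualizations over all (r_k, first-order vector)
    pairs in `deps`, as in the last equation above."""
    result = {}
    for r, arg_vec in deps:
        for dim, val in contextualize(second_order_vec, r, arg_vec).items():
            result[dim] = result.get(dim, 0.0) + val
    return result
```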
Experiments: Ranking Paraphrases
In this section, we evaluate our model on a paraphrase ranking task. We consider sentences with an occurrence of some target word w and a list of paraphrase candidates w 1 , . . . , w k such that each of the w i is a paraphrase of w for some sense of w. The task is to decide for each of the paraphrase candidates w i how appropriate it is as a paraphrase of w in the given context. For instance, buy, purchase and obtain are all paraphrases of acquire, in the sense that they can be substituted for acquire in some contexts, but purchase and buy are not paraphrases of acquire in the first sentence of Table 1.
Table 1: Two examples from the lexical substitution task data set

Sentence: Teacher education students will acquire the knowledge and skills required to [...]
Paraphrases: gain 4; amass 1; receive 1; obtain 1

Sentence: Ontario Inc. will [...] acquire the remaining IXOS shares [...]
Paraphrases: buy 3; purchase 1; gain 1; get 1; procure 2; obtain 1
Resources
We use a vector model based on dependency trees obtained from parsing the English Gigaword corpus (LDC2003T05). The corpus consists of news from several newswire services, and contains over four million documents. We parse the corpus using the Stanford parser 1 (de Marneffe et al., 2006) and a non-lexicalized parser model, and extract over 1.4 billion dependency triples for about 3.9 million words (lemmas) from the parsed corpus.
To evaluate the performance of our model, we use various subsets of the SemEval 2007 lexical substitution task (McCarthy and Navigli, 2007) dataset. The complete dataset contains 10 instances for each of 200 target words-nouns, verbs, adjectives and adverbs-in different sentential contexts. Systems that participated in the task had to generate paraphrases for every instance, and were evaluated against a gold standard containing up to 10 possible paraphrases for each of the individual instances.
There are two natural subtasks in generating paraphrases: identifying paraphrase candidates and ranking them according to the context. We follow E&P and evaluate it only on the second subtask: we extract paraphrase candidates from the gold standard by pooling all annotated gold-standard paraphrases for all instances of a verb in all contexts, and use our model to rank these paraphrase candidates in specific contexts. Table 1 shows two instances of the target verb acquire together with its paraphrases in the gold standard as an example. The paraphrases are attached with weights, which correspond to the number of times they have been given by different annotators.
Evaluation metrics
To evaluate the performance of our method we use generalized average precision (Kishida, 2005), a variant of average precision.
Average precision (Buckley and Voorhees, 2000) is a measure commonly used to evaluate systems that return ranked lists of results. Generalized average precision (GAP) additionally rewards the correct order of positive cases w.r.t. their gold standard weight. We define average precision first:
$$AP = \frac{\sum_{i=1}^{n} x_i\, p_i}{R}, \qquad p_i = \frac{\sum_{k=1}^{i} x_k}{i}$$

where x_i is a binary variable indicating whether the ith item as ranked by the model is in the gold standard or not, R is the size of the gold standard, and n is the number of paraphrase candidates to be ranked. If we take x_i to be the gold standard weight of the ith item or zero if it is not in the gold standard, we can define generalized average precision as follows:

$$GAP = \frac{\sum_{i=1}^{n} I(x_i)\, p_i}{\sum_{i=1}^{R} I(y_i)\, \bar{y}_i}$$
where I(x_i) = 1 if x_i is larger than zero, and zero otherwise, and \bar{y}_i is the average weight of the ideal ranked list y_1, ..., y_i of gold standard paraphrases. As a second scoring method, we use precision out of ten (P_10). The measure is less discriminative than GAP. We use it because we want to compare our model with E&P. P_10 measures the percentage of gold-standard paraphrases in the top-ten list of paraphrases as ranked by the system, and can be defined as follows (McCarthy and Navigli, 2007):

$$P_{10} = \frac{\sum_{s \in M \cap G} f(s)}{\sum_{s \in G} f(s)}$$

where M is the list of 10 paraphrase candidates top-ranked by the model, G is the corresponding annotated gold-standard data, and f(s) is the weight of the individual paraphrases.
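The two scores can be computed as in the following sketch, where the input conventions (gold weights in model order for GAP, ideal ordering for the denominator) follow the definitions above; the code is an illustration rather than the evaluation script actually used.

```python
def gap(ranked_weights, gold_weights):
    """Generalized average precision. `ranked_weights` are the gold weights
    x_i of the items in model order (0 for items not in the gold standard);
    `gold_weights` are the gold-standard weights sorted in ideal order."""
    num, cum = 0.0, 0.0
    for i, x in enumerate(ranked_weights, start=1):
        cum += x
        if x > 0:
            num += cum / i          # I(x_i) * p_i
    den, cum_ideal = 0.0, 0.0
    for i, y in enumerate(gold_weights, start=1):
        cum_ideal += y
        den += cum_ideal / i        # I(y_i) * average weight up to i
    return num / den

def p10(top10, gold):
    """Precision out of ten: summed gold weights of the ten top-ranked
    candidates, normalised by the total gold weight. `gold` maps a
    paraphrase to its weight."""
    return sum(gold.get(s, 0.0) for s in top10) / sum(gold.values())
```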
Experiment 1: Verb paraphrases
In our first experiment, we consider verb paraphrases using the same controlled subset of the lexical substitution task data that had been used by TDP in an earlier study. We compare our model to various baselines and the models of TDP and E&P, and show that our new model substantially outperforms previous work.
Dataset. The dataset is identical to the one used by TDP and has been constructed in the same way as the dataset used by E&P: it contains those gold-standard instances of verbs that have, according to the analyses produced by the MiniPar parser (Lin, 1993), an overtly realized subject and object. Gold-standard paraphrases that do not occur in the parsed British National Corpus are removed.2 In total, the dataset contains 162 instances for 34 different verbs. On average, target verbs have 20.5 substitution candidates; for individual instances of a target verb, an average of 3.9 of the substitution candidates are annotated as correct paraphrases. Below, we will refer to this dataset as "LST/SO."
Experimental procedure. To compute the vector space, we consider only a subset of the complete set of dependency triples extracted from the parsed Gigaword corpus. We experimented with various strategies, and found that models which consider all dependency triples exceeding certain pmiand frequency thresholds perform best. Since the dataset is rather small, we use a fourfold cross-validation method for parameter tuning: We divide the dataset into four subsets, test various parameter settings on one subset and use the parameters that perform best (in terms of GAP) to evaluate the model on the three other subsets. We consider the following parameters: pmi-thresholds for the dependency triples used in the computation of the first-and second-order vectors, and frequency thresholds. The parameters differ only slightly between the four subsets, and the general tendency is that good results are obtained if a low pmi-threshold (≤ 2) is applied to filter dependency triples used in the computation of the second-order vectors, and a relatively high pmi-threshold (≥ 4) to filter dependency triples in the computation of the first-order vectors. Good performing frequency thresholds are 10 or 15. The threshold values for context vectors are slightly different: a medium pmi-threshold between 2 and 4 and a low frequency threshold of 3.
To rank paraphrases in context, we compute contextualized vectors for the verb in the input sentence, i.e., a second-order vector for the verb that is contextually constrained by the first-order vectors of all its arguments, and compare them to the unconstrained (second-order) vectors of each paraphrase candidate, using cosine similarity.3 For the first sentence in Table 1, for example, we compute [[acquire_{SUBJ:student, OBJ:knowledge}]] and compare it to [[gain]], [[amass]], [[buy]], [[purchase]] and so on.

Baselines. We evaluate our model against a random baseline and two variants of our model: One variant ("2nd order uncontextualized") simply uses contextually unconstrained second-order vectors to rank paraphrase candidates. Comparing the full model to this variant will show how effective our method of contextualizing vectors is. The second variant ("1st order contextualized") represents verbs in context by their first-order vectors that specify how often the verb co-occurs with its arguments in the parsed Gigaword corpus. We compare our model to this baseline to demonstrate the benefit of (contextualized) second-order vectors. As for the full model, we use pmi values rather than raw frequency counts as co-occurrence statistics.
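For illustration, the ranking step can be sketched as follows, reusing the second_order and contextualization helpers from the sketches above; the way ω is stored and the candidate lists are assumptions of the sketch rather than details of the actual system.

```python
import math

def cosine(u, v):
    dot = sum(u[d] * v.get(d, 0.0) for d in u)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def rank_paraphrases(target_vec_in_context, candidates, omega):
    """Rank paraphrase candidates by cosine similarity between the
    contextualized second-order vector of the target verb and the plain
    second-order vector of each candidate."""
    scored = [(cand, cosine(target_vec_in_context, second_order(cand, omega)))
              for cand in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)
```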
Results. For the LST/SO dataset, the generalized average precision, averaged over all instances in the dataset, is 45.94%, and the average P 10 is 73.11%.
Table 2 compares our model to the random baseline, the two variants of our model, and previous work. As can be seen, our model improves about 8% in terms of GAP and almost 7% in terms of P 10 upon the two variants of our model, which in turn perform 10% above the random baseline. We conclude that both the use of second-order vectors, as well as the method used to contextualize them, are very effective for the task under consideration.
The table also compares our model to the model of TDP and two different instantiations of E&P's model. The results for these three models are cited from Thater et al. (2009). We can observe that our model improves about 9% in terms of GAP and about 7% in terms of P10 upon previous work. Note that the results for the E&P models are based on a reimplementation of E&P's original model; the P10-scores reported by Erk and Padó (2009) range between 60.2 and 62.3, over a slightly lower random baseline. According to a paired t-test the differences are statistically significant at p < 0.01.

Table 2: Results of Experiment 1

3 Note that the context information is the same for both words. With our choice of pointwise multiplication for the composition operator × we have (v_1 × w) · v_2 = v_1 · (v_2 × w). Therefore the choice of which word is contextualized does not strongly influence their cosine similarity, and contextualizing both should not add any useful information. On the contrary we found that it even lowers performance. Although this could be repaired by appropriately modifying the operator ×, for this experiment we stick with the easier solution of only contextualizing one of the words.
Performance on the complete dataset. To find out how our model performs on less controlled datasets, we extracted all instances from the lexical substitution task dataset with a verb target, excluding only instances which could not be parsed by the Stanford parser, or in which the target was mistagged as a non-verb by the parser. The resulting dataset contains 496 instances. As for the LST/SO dataset, we ignore all gold-standard paraphrases that do not occur in the parsed (Gigaword) corpus.
If we use the best-performing parameters from the first experiment, we obtain a GAP score of 45.17% and a P 10 -score of 75.43%, compared to random baselines of 27.42% (GAP) and 58.83% (P 10 ). The performance on this larger dataset is thus almost the same compared to our results for the more controlled dataset. We take this as evidence that our model is quite robust w.r.t. different realizations of a verb's subcategorization frame.
Experiment 2: Non-verb paraphrases
We now apply our model to parts of speech (POS) other than verbs. The main difference between verbs on the one hand, and nouns, adjectives, and adverbs on the other hand, is that verbs typically come with a rich context (subject, object, and so on), while non-verbs often have either no dependents at all or only closed-class dependents such as determiners, which provide only limited contextual information, if any at all. While we can apply the same method as before also to non-verbs, we might expect it to work less well due to limited contextual information.

Table 3: GAP-scores for non-verb paraphrases using two different methods.
We therefore propose an alternative method to rank non-verb paraphrases: We take the second-order vector of the target's head and contextually constrain it by the first-order vector of the target. For instance, if we want to rank the paraphrase candidates hint and star for the noun lead in the sentence

(1) Meet for coffee early, swap leads and get permission to contact if possible.

we compute [[swap_{OBJ:lead}]] and compare it to the lifted first-order vectors of all paraphrase candidates, L_OBJ([hint]) and L_OBJ([star]), using cosine similarity.

To evaluate the performance of the two methods, we extract all instances from the lexical substitution task dataset with a nominal, adjectival, or adverbial target, excluding instances with an incorrect parse or no parse at all. As before, we ignore gold-standard paraphrases that do not occur in the parsed Gigaword corpus.
The results are shown in Table 3, where "M1" refers to the method we used before on verbs, and "M2" refers to the alternative method described above. As one can see, M1 achieves better results than M2 if applied to nouns, while M2 is better than M1 if applied to adjectives and adverbs. The second result is unsurprising, as adjectives and adverbs often have no dependents at all.
We can observe that the performance of our model is similarly strong on non-verbs. GAP scores on nouns (using M1) and adverbs are even higher than those on verbs. We take these results to show that our model can be successfully applied to all open word classes.
Experiment: Ranking Word Senses
In this section, we apply our model to a different word sense ranking task: Given a word w in context, the task is to decide to what extent the different WordNet (Fellbaum, 1998) senses of w apply to this occurrence of w.
Dataset. We use the dataset provided by Erk and McCarthy (2009). The dataset contains ordinal judgments of the applicability of WordNet senses on a 5 point scale, ranging from completely different to identical for eight different lemmas in 50 different sentential contexts. In this experiment, we concentrate on the three verbs in the dataset: ask, add and win.
Experimental procedure. Similar to Pennacchiotti et al. (2008), we represent different word senses by the words in the corresponding synsets. For each word sense, we compute the centroid of the second-order vectors of its synset members. Since synsets tend to be small (they even may contain only the target word itself), we additionally add the centroid of the sense's hypernyms, scaled down by the factor 10 (chosen as a rough heuristic without any attempt at optimization).
We apply the same method as in Section 4.3: For each instance in the dataset, we compute the second-order vector of the target verb, contextually constrain it by the first-order vectors of the verb's arguments, and compare the resulting vector to the vectors that represent the different WordNet senses of the verb. The WordNet senses are then ranked according to the cosine similarity between their sense vector and the contextually constrained target verb vector.
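For illustration, such sense vectors can be sketched with NLTK's WordNet interface as follows; the helper second_order from the earlier sketches is reused, the handling of multiword lemmas is simplified, and the scaling factor 0.1 corresponds to the factor-10 down-scaling of hypernyms mentioned above.

```python
from nltk.corpus import wordnet as wn

def centroid(vectors):
    out = {}
    if not vectors:
        return out
    for v in vectors:
        for dim, val in v.items():
            out[dim] = out.get(dim, 0.0) + val / len(vectors)
    return out

def sense_vector(synset, omega, hypernym_scale=0.1):
    """Represent a WordNet sense by the centroid of the second-order vectors
    of its synset members, plus the scaled-down centroid of its hypernyms'
    members."""
    own = centroid([second_order(l, omega) for l in synset.lemma_names()])
    hyper_lemmas = [l for h in synset.hypernyms() for l in h.lemma_names()]
    hyper = centroid([second_order(l, omega) for l in hyper_lemmas])
    vec = dict(own)
    for dim, val in hyper.items():
        vec[dim] = vec.get(dim, 0.0) + hypernym_scale * val
    return vec

# e.g. vectors for all senses of "ask":
# [sense_vector(s, omega) for s in wn.synsets("ask", pos=wn.VERB)]
```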
To compare the predicted ranking to the gold-standard ranking, we use Spearman's ρ, a standard method to compare ranked lists to each other. We compute ρ between the similarity scores averaged over all three annotators and our model's predictions. Based on agreement between human judges, Erk and McCarthy (2009) estimate an upper bound ρ of 0.544 for the dataset.
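Computing ρ between the two rankings can be done, for instance, with SciPy; the scores below are placeholder values used only to show the call.

```python
from scipy.stats import spearmanr

# model_scores: cosine similarities of the contextualized target vector with
# each WordNet sense vector; human_scores: judgments averaged over the three
# annotators, in the same sense order (both lists are placeholders here).
model_scores = [0.41, 0.18, 0.33, 0.25]
human_scores = [4.3, 1.7, 3.0, 2.3]
rho, p_value = spearmanr(model_scores, human_scores)
print(rho, p_value)
```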
Results. Table 4 shows the results of our experiment. The first column shows the correlation of our model's predictions with the human judgments from the gold-standard, averaged over all instances. All correlations are significant (p < 0.001) as tested by approximate randomization (Noreen, 1989).
The second column shows the results of a frequency-informed baseline, which predicts the ranking based on the order of the senses in WordNet. This (weakly supervised) baseline outperforms our unsupervised model for two of the three verbs. As a final step, we explored the effect of combining our rankings with those of the frequency baseline, by simply computing the average ranks of those two models. The results are shown in the third column. Performance is significantly higher than for both the original model and the frequency-informed baseline. This shows that our model captures an additional kind of information, and thus can be used to improve the frequency-based model.
Conclusion
We have presented a novel method for adapting the vector representations of words according to their context. In contrast to earlier approaches, our model incorporates detailed syntactic information.
We solved the problems of data sparseness and incompatibility of dimensions which are inherent in this approach by modeling contextualization as an interplay between first-and second-order vectors. Evaluating on the SemEval 2007 lexical substitution task dataset, our model performs substantially better than all earlier approaches, exceeding the state of the art by around 9% in terms of generalized average precision and around 7% in terms of precision out of ten. Also, our system is the first unsupervised method that has been applied to Erk and McCarthy's (2009) graded word sense assignment task, showing a substantial positive correlation with the gold standard. We further showed that a weakly supervised heuristic, making use of WordNet sense ranks, can be significantly improved by incorporating information from our system.
We studied the effect that context has on target words in a series of experiments, which vary the target word and keep the context constant. A natural objective for further research is the influence of varying contexts on the meaning of target expressions. This extension might also shed light on the status of the modelled semantic process, which we have been referring to in this paper as "contextualization". This process can be considered one of mutual disambiguation, which is basically the view of E&P. Alternatively, one can conceptualize it as semantic composition: in particular, the head of a phrase incorporates semantic information from its dependents, and the final result may to some extent reflect the meaning of the whole phrase.
Another direction for further study will be the generalization of our model to larger syntactic contexts, including more than only the direct neighbors in the dependency graph, ultimately incorporating context information from the whole sentence in a recursive fashion.
Table 4: Correlation of model predictions and human judgments

Word      Present paper   WN-Freq   Combined
ask       0.344           0.369     0.431
add       0.256           0.164     0.270
win       0.236           0.343     0.381
average   0.279           0.291     0.361
We use version 1.6 of the parser. We modify the dependency trees by "folding" prepositions into the edge labels to make the relation between a head word and the head noun of a prepositional phrase explicit.
Both TDP and E&P use the British National Corpus.
Acknowledgments. We would like to thank Eduard Hovy and Georgiana Dinu for inspiring discussions and helpful comments. This work was supported by the Cluster of Excellence "Multimodal Computing and Interaction", funded by the German Excellence Initiative, and the project SALSA, funded by DFG (German Science Foundation).
Chris Buckley and Ellen M. Voorhees. 2000. Evaluating evaluation measure stability. In Proceedings of the 23rd Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 33-40, Athens, Greece.
Kenneth W. Church and Patrick Hanks. 1990. Word association, mutual information and lexicography. Computational Linguistics, 16(1):22-29.
Marie-Catherine de Marneffe, Bill MacCartney, and Christopher D. Manning. 2006. Generating typed dependency parses from phrase structure parses. In Proceedings of the fifth international conference on Language Resources and Evaluation (LREC 2006), pages 449-454, Genoa, Italy.
Dmitriy Dligach and Martha Palmer. 2008. Novel semantic features for verb sense disambiguation. In Proceedings of ACL-08: HLT, Short Papers, pages 29-32, Columbus, OH, USA.
Katrin Erk and Diana McCarthy. 2009. Graded word sense assignment. In Proceedings of the 2009 Conference on Empirical Methods in Natural Language Processing, pages 440-449, Singapore.
Katrin Erk and Sebastian Padó. 2008. A structured vector space model for word meaning in context. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, Honolulu, HI, USA.
Katrin Erk and Sebastian Padó. 2009. Paraphrase assessment in structured vector space: Exploring parameters and datasets. In Proceedings of the Workshop on Geometrical Models of Natural Language Semantics, Athens, Greece.
Christiane Fellbaum, editor. 1998. WordNet: An Electronic Lexical Database. Bradford Book.
Walter Kintsch. 2001. Predication. Cognitive Science, 25:173-202.
Kazuaki Kishida. 2005. Property of average precision and its generalization: An examination of evaluation indicator for information retrieval experiments. NII Technical Report.
Thomas K. Landauer and Susan T. Dumais. 1997. A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction, and representation of knowledge. Psychological Review, 104(2):211-240.
Principle-based parsing without overgeneration. Dekang Lin, Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics. the 31st Annual Meeting of the Association for Computational LinguisticsColumbus, OH, USADekang Lin. 1993. Principle-based parsing without overgeneration. In Proceedings of the 31st Annual Meeting of the Association for Computational Lin- guistics, pages 112-120, Columbus, OH, USA.
Automatic retrieval and clustering of similar words. Dekang Lin, Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics. the 36th Annual Meeting of the Association for Computational Linguistics and 17th International Conference on Computational Linguistics2Dekang Lin. 1998. Automatic retrieval and clustering of similar words. In Proceedings of the 36th Annual Meeting of the Association for Computational Lin- guistics and 17th International Conference on Com- putational Linguistics, Volume 2, pages 768-774.
Introduction to Information Retrieval. Christopher D Manning, Prabhakar Raghavan, Hinrich Schütze, Cambridge University PressChristopher D. Manning, Prabhakar Raghavan, and Hinrich Schütze. 2008. Introduction to Information Retrieval. Cambridge University Press.
Disambiguating nouns, verbs, and adjectives using automatically acquired selectional preferences. Diana Mccarthy, John Carroll, Computational Linguistics. 294Diana McCarthy and John Carroll. 2003. Disam- biguating nouns, verbs, and adjectives using auto- matically acquired selectional preferences. Compu- tational Linguistics, 29(4):639-654.
SemEval-2007 Task 10: English Lexical Substitution Task. Diana Mccarthy, Roberto Navigli, Proc. of SemEval. of SemEvalPrague, Czech RepublicDiana McCarthy and Roberto Navigli. 2007. SemEval- 2007 Task 10: English Lexical Substitution Task. In Proc. of SemEval, Prague, Czech Republic.
Vector-based models of semantic composition. Jeff Mitchell, Mirella Lapata, Proceedings of ACL-08: HLT. ACL-08: HLTColumbus, OH, USAJeff Mitchell and Mirella Lapata. 2008. Vector-based models of semantic composition. In Proceedings of ACL-08: HLT, pages 236-244, Columbus, OH, USA.
The proper treatment of quantification in ordinary English. Richard Montague, Approaches to Natural Language. Julius Moravcsik, and Patrick Suppes; DordrechtJaakko HintikkaRichard Montague. 1973. The proper treatment of quantification in ordinary English. In Jaakko Hin- tikka, Julius Moravcsik, and Patrick Suppes, editors, Approaches to Natural Language, pages 221-242. Dordrecht.
Computer-intensive Methods for Testing Hypotheses: An Introduction. Eric W Noreen, John Wiley and Sons IncEric W. Noreen. 1989. Computer-intensive Methods for Testing Hypotheses: An Introduction. John Wi- ley and Sons Inc.
Dependency-based construction of semantic space models. Sebastian Padó, Mirella Lapata, Computational Linguistics. 332Sebastian Padó and Mirella Lapata. 2007. Dependency-based construction of semantic space models. Computational Linguistics, 33(2):161-199.
Automatic induction of framenet lexical units. Marco Pennacchiotti, Diego De Cao, Roberto Basili, Danilo Croce, Michael Roth, Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing. the 2008 Conference on Empirical Methods in Natural Language ProcessingHonolulu, HI, USAMarco Pennacchiotti, Diego De Cao, Roberto Basili, Danilo Croce, and Michael Roth. 2008. Automatic induction of framenet lexical units. In Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing, pages 457-465, Hon- olulu, HI, USA.
Automatic word sense discrimination. Hinrich Schütze, Computational Linguistics. 241Hinrich Schütze. 1998. Automatic word sense discrim- ination. Computational Linguistics, 24(1):97-124.
Ranking paraphrases in context. Stefan Thater, Georgiana Dinu, Manfred Pinkal, Proceedings of the 2009 Workshop on Applied Textual Inference. the 2009 Workshop on Applied Textual InferenceSingaporeStefan Thater, Georgiana Dinu, and Manfred Pinkal. 2009. Ranking paraphrases in context. In Proceed- ings of the 2009 Workshop on Applied Textual Infer- ence, pages 44-47, Singapore. |
52,009,979 | GenSense: A Generalized Sense Retrofitting Model | With the aid of recently proposed word embedding algorithms, the study of semantic similarity has progressed and advanced rapidly. However, many natural language processing tasks need sense level representation. To address this issue, some researches propose sense embedding learning algorithms. In this paper, we present a generalized model from the existing sense retrofitting model. The generalization takes three major components: semantic relations between the senses, the relation strength and the semantic strength. In the experiments, we show that the generalized model outperforms the previous approaches in three aspects: semantic relatedness, contextual word similarity and semantic difference. | [
14667200,
14719746,
9711750,
9724599,
294542,
6222768,
51838647,
1957433,
2156506,
9914140
] | GenSense: A Generalized Sense Retrofitting Model
August 20-26. 2018
Yang-Yin Lee
Department of Computer Science and Information Engineering
National Taiwan University No
1, Sec. 4, Roosevelt Road10617TaipeiTaiwan
Ting-Yu Yen
Department of Computer Science and Information Engineering
National Taiwan University No
1, Sec. 4, Roosevelt Road10617TaipeiTaiwan
Hen-Hsen Huang hhhuang@nlg.csie.ntu.edu.tw
Department of Computer Science and Information Engineering
National Taiwan University No
1, Sec. 4, Roosevelt Road10617TaipeiTaiwan
Yow-Ting Shiue
Department of Computer Science and Information Engineering
National Taiwan University No
1, Sec. 4, Roosevelt Road10617TaipeiTaiwan
Hsin-Hsi Chen hhchen@ntu.edu.tw
Department of Computer Science and Information Engineering
National Taiwan University No
1, Sec. 4, Roosevelt Road10617TaipeiTaiwan
MOST Joint Research Center for AI Technology and All Vista Healthcare
Taiwan
GenSense: A Generalized Sense Retrofitting Model
Proceedings of the 27th International Conference on Computational Linguistics
the 27th International Conference on Computational Linguistics, Santa Fe, New Mexico, USA, August 20-26, 2018, page 1662
With the aid of recently proposed word embedding algorithms, the study of semantic similarity has progressed and advanced rapidly. However, many natural language processing tasks need sense level representation. To address this issue, some researches propose sense embedding learning algorithms. In this paper, we present a generalized model from the existing sense retrofitting model. The generalization takes three major components: semantic relations between the senses, the relation strength and the semantic strength. In the experiments, we show that the generalized model outperforms the previous approaches in three aspects: semantic relatedness, contextual word similarity and semantic difference.
Introduction
The distributed representation of words (word embedding) has drawn great interest in recent years due to its ability to acquire syntactic and semantic information from a large unannotated corpus (Mikolov et al., 2013; Pennington et al., 2014). Building on pre-trained word embeddings, several studies propose post-processing models that incorporate existing semantic knowledge into the embedding space (Faruqui et al., 2015; Yu and Dredze, 2014). However, word embedding models use only one vector to represent a word, which is problematic in natural language processing applications that require sense-level representations (e.g., word sense disambiguation, semantic relation identification, etc.). As a result, some studies address the polysemy and homonymy issue by introducing sense-level embeddings, either in a pre-processing (Iacobacci et al., 2015) or a post-processing fashion.
In this research, we focus on the post-processing sense retrofitting approach and propose GenSense, a generalized sense embedding learning framework that retrofits a pre-trained word embedding by incorporating the semantic relations between senses, the relation strength and the semantic strength. Although some parts of the idea are not new, this is the first time all of them are put into one generalized framework. Our proposed GenSense for generating low-dimensional sense embeddings is inspired by the sense retrofitting (retro) model, but has three major differences. First, we generalize the semantic relations from positive relations only (e.g., synonyms, hyponyms, paraphrases, etc.) to both positive and negative relations (e.g., antonyms). Second, each relation incorporates both a semantic strength and a relation strength. Within a semantic relation, each link should carry its own semantic strength: for example, although jewel has the synonyms gem and rock, the similarity between (jewel, gem) is clearly higher than between (jewel, rock), so (jewel, gem) should receive a higher weight. Last, GenSense assigns different relation strengths to different relations. For example, if the objective is to train a sense embedding that can distinguish positive from negative senses, then the weight of the negative relation (e.g., antonyms) should be higher, and vice versa. The experimental results suggest that the relation strengths play a role in balancing the relations and are application dependent. With an objective that considers these three parts, the sense vectors can be learned and updated by running a belief propagation process on the relation-constrained network.
In the experiments, we show that our proposed GenSense model outperforms the previous approaches on three types of datasets: semantic relatedness, contextual word similarity and semantic difference. While the generalized model that considers all the relations performs well on the semantic relatedness tasks, we also find that the antonym relation is particularly beneficial in the semantic difference experiment. The remainder of this paper is organized as follows. Section 2 gives a survey of related work. Section 3 defines the generalized sense retrofitting model. The experimental setup is in Section 4. Section 5 shows and discusses the experimental results. Section 7 concludes the paper.
Related Works
The study of word representations has a long history. Early approaches build a term-document occurrence matrix from a large corpus and then apply dimension reduction techniques such as singular value decomposition (latent semantic analysis) (Bullinaria and Levy, 2007; Deerwester et al., 1990). More recent word embedding approaches rely on neural methods (Dragoni and Petrucci, 2017; Mikolov et al., 2013; Pennington et al., 2014) and perform well on syntactic and semantic tasks. Apart from unsupervised word embedding learning models, there are many ontologies that contain lexical knowledge, such as WordNet (Fellbaum, 1998), Roget's 21st Century Thesaurus (Kipfer and Princeton Language Institute, 1993) or the paraphrase database (Pavlick et al., 2015). As a result, many studies combine word embeddings with ontological resources, either in a joint training (Bian et al., 2014; Liu et al., 2016; Yu and Dredze, 2014) or a post-processing (Faruqui et al., 2015) fashion. As the need for sense embeddings grows, some studies take inspiration from word-level embedding learning models and propose sense-level embeddings (Iacobacci et al., 2015; Lee and Chen, 2017). Although some evidence shows that sense embeddings do not improve every natural language processing task (Li and Jurafsky, 2015), they remain valuable for tasks that need sense-level representations (Azzini et al., 2012; Ettinger et al., 2016; Qiu et al., 2016).
Generalized Sense Retrofitting Model
Let W = {w_1, ..., w_n} be the vocabulary of a trained word embedding and |W| be its size. The matrix Ŵ is the pre-trained collection of vector representations ŵ_i ∈ R^d, where d is the dimensionality of a word vector. Each ŵ_i ∈ Ŵ is learned using a standard word embedding technique (e.g., GloVe (Pennington et al., 2014) or Word2Vec (Mikolov et al., 2013)). Let Ω = (S, E) be an ontology that contains the semantic relationships, where S = {s_1, ..., s_m} is a set of senses and |S| is the total number of senses. An edge (s_i, s_j) ∈ E indicates a semantic relationship of interest (e.g., synonymy) between s_i and s_j. In our scenario, the edge set consists of several disjoint subsets of interest (i.e., E = E_1 ∪ E_2 ∪ ... ∪ E_r). For example, (s_i, s_j) ∈ E_1 if and only if s_j is a synonym of s_i. We use ŵ_{s_i} to denote the word form vector of s_i (note that ŵ_{s_i} and ŵ_{s_j} may map to the same vector representation even if s_i ≠ s_j). The goal is then to learn a new matrix S = (s_1, ..., s_m) such that each new sense vector is close to its word form vertex and to its synonym neighbors. The basic form of the objective of the sense retrofitting model, considering only the synonym relation, is:
\sum_{i=1}^{m} \Big[ \alpha_1 \| s_i - \hat{w}_{s_i} \|^2 + \alpha_2 \sum_{(s_i, s_j) \in E_1} \beta_{ij} \| s_i - s_j \|^2 \Big] \qquad (1)
where the α terms balance the importance of the word form vertex and of the synonyms, and the β_{ij} terms control the strength of the semantic relations. With equation 1, each learned sense vector stays close to its synonyms while its distance to its original word form vector remains constrained. This equation can be further generalized to consider all the relations:
\sum_{i=1}^{m} \Big[ \alpha_1 \| s_i - \hat{w}_{s_i} \|^2 + \alpha_2 \sum_{(s_i, s_j) \in E_1} \beta_{ij} \| s_i - s_j \|^2 + \cdots \Big] \qquad (2)
Apart from the positive sense relation, we now introduce three types of special relations. The first one is the positive contextual neighbor relation E_2: (s_i, w_j) ∈ E_2 if and only if w_j has only one sense. In our model, we use the word form vector to represent the neighbors of the senses in E_2. Those neighbors are viewed as positive contextual neighbors, as they are learned from the context of a corpus (e.g., word2vec trained on the Google News corpus) with a positive meaning. The second is the negative sense relation E_3 (e.g., antonymy). The negative senses are used in a subtractive fashion, pushing the sense away from the positive meaning. The last is the negative contextual neighbor relation E_4. Just like the positive contextual neighbors, the negative contextual neighbors are learned from the context of a corpus, but carry a negative meaning. Figure 1 illustrates an example of the relation network. In Figure 1, gay may have two meanings: (1) bright and pleasant; promoting a feeling of cheer and (2) someone who is sexually attracted to persons of the same sex. If we focus on the first sense, our model attracts its sense vector towards its word form vector, its synonym and its positive contextual neighbor, while at the same time pushing it away from its antonym and its negative contextual neighbor. To formalize the above scenario and consider all the parts, equation 2 becomes:
\sum_{i=1}^{m} \Big[ \alpha_1 \| s_i - \hat{w}_{s_i} \|^2 + \alpha_2 \sum_{(s_i, s_j) \in E_1} \beta_{ij} \| s_i - s_j \|^2 + \alpha_3 \sum_{(s_i, w_j) \in E_2} \beta_{ij} \| s_i - \hat{w}_j \|^2 - \alpha_4 \sum_{(s_i, s_j) \in E_3} \beta_{ij} \| s_i - s_j \|^2 - \alpha_5 \sum_{(s_i, w_j) \in E_4} \beta_{ij} \| s_i - \hat{w}_j \|^2 \Big] \qquad (3)
We therefore apply an iterative updating method to find the solution of the above convex objective function (Bengio et al., 2006). Initially, the sense vectors are set to their corresponding word form vectors (i.e., s_i ← ŵ_{s_i} for all i). In the following iterations, the updating formula for s_i is:

s_i = \frac{-\alpha_5 \sum_{j:(s_i,w_j) \in E_4} \beta_{ij} \hat{w}_j - \alpha_4 \sum_{j:(s_i,s_j) \in E_3} \beta_{ij} s_j + \alpha_3 \sum_{j:(s_i,w_j) \in E_2} \beta_{ij} \hat{w}_j + \alpha_2 \sum_{j:(s_i,s_j) \in E_1} \beta_{ij} s_j + \alpha_1 \hat{w}_{s_i}}{-\alpha_5 \sum_{j:(s_i,w_j) \in E_4} \beta_{ij} - \alpha_4 \sum_{j:(s_i,s_j) \in E_3} \beta_{ij} + \alpha_3 \sum_{j:(s_i,w_j) \in E_2} \beta_{ij} + \alpha_2 \sum_{j:(s_i,s_j) \in E_1} \beta_{ij} + \alpha_1} \qquad (4)
A formal description of our proposed GenSense method is shown in Algorithm 1. In Algorithm 1, the β parameters are retrieved from the ontology. The threshold ϵ decides whether to update a sense vector, acting as a stopping criterion when the difference between the new sense vector and the previous one is too small. Experimentally, 10 iterations are sufficient to minimize the objective function from a set of starting vectors and produce effective sense-retrofitted vectors.
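To make the update of Eq. (4) and Algorithm 1 concrete, here is a minimal Python sketch of the iterative procedure under our own simplifying assumptions (numpy vectors, relations stored as lists of (neighbor, β) pairs, no safeguard against a vanishing denominator); it is an illustration, not the authors' released implementation.

```python
import numpy as np

def gensense_update(word_vecs, sense_to_word, relations, alphas, n_iter=10, eps=0.1):
    """Sketch of the Eq. (4) update.

    word_vecs: dict word -> np.ndarray (pre-trained embeddings, kept fixed)
    sense_to_word: dict sense -> word form of that sense
    relations: dict relation name ('syn', 'pos_ctx', 'ant', 'neg_ctx') ->
               dict sense -> list of (neighbor, beta); neighbors are senses for
               'syn'/'ant' and plain words for 'pos_ctx'/'neg_ctx'
    alphas: dict with keys 'word', 'syn', 'pos_ctx', 'ant', 'neg_ctx'
    """
    # Initialize every sense vector with its word-form vector.
    sense_vecs = {s: word_vecs[w].copy() for s, w in sense_to_word.items()}
    signs = {'syn': +1, 'pos_ctx': +1, 'ant': -1, 'neg_ctx': -1}
    for _ in range(n_iter):
        for s in sense_vecs:
            num = alphas['word'] * word_vecs[sense_to_word[s]]
            den = alphas['word']
            for rel, sign in signs.items():
                for nb, beta in relations.get(rel, {}).get(s, []):
                    nb_vec = sense_vecs[nb] if rel in ('syn', 'ant') else word_vecs[nb]
                    num = num + sign * alphas[rel] * beta * nb_vec
                    den = den + sign * alphas[rel] * beta
            new_vec = num / den
            # Only apply the update when the change exceeds the threshold eps.
            if np.linalg.norm(new_vec - sense_vecs[s]) > eps:
                sense_vecs[s] = new_vec
    return sense_vecs
```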
Experiments
We evaluate GenSense with three types of experiments: semantic relatedness, contextual word similarity, and semantic difference. In the testing phase, if a test dataset has missing words, we use the average of all sense vectors to represent the missing word. Note that our reported results for the vanilla sense embedding may differ slightly from other studies due to the treatment of missing words and the similarity computation method: some studies use a zero vector to represent missing words, whereas others remove those words from the dataset. Within this work, however, the reported performances are comparable because the same missing-word processing method and the same similarity computation method are used throughout.
Algorithm 1 GenSense
Input: a pre-trained word embedding Ŵ, a relation ontology Ω = (S, E), hyper-parameters α and parameters β, the number of iterations num_iter, the convergence criterion for sense vectors ϵ. Output: a trained sense embedding S. The algorithm initializes every sense vector with its word form vector and then, for num_iter iterations, updates each sense vector with the formula in equation 4, skipping updates whose change falls below ϵ.
We adopt the GloVe model in our experiments (Pennington et al., 2014). The pre-trained GloVe word embedding is trained on Wikipedia and Gigaword-5 (6B tokens, 400k vocabulary, uncased, 50d vectors). Roget's 21st Century Thesaurus (Kipfer and Princeton Language Institute, 1993) (Roget) is selected for building the ontology in our experiments, as it contains strength information for the senses. As Roget does not provide the ontology directly, we manually built a synonym ontology and an antonym ontology from the resource. The vocabulary of the pre-trained GloVe word embedding is used for fetching and building the ontology from Roget. Roget distinguishes three levels of synonym relations; we set the βs to 1.0, 0.6 and 0.3 for the nearest to the farthest synonyms. The antonym relation is built in the same way. For each sense, the overall relation weight is set to the sum of all the relation-specific weights. Unless specifically addressed, the αs are set to 1 in the experiments. We set the convergence criterion for sense vectors to ϵ = 0.1 and the number of iterations to 10. Thanks to the generality of the framework, we run three variants of the model: GenSense-syn (only considers the synonyms and positive contextual neighbors), GenSense-ant (only considers the antonyms and negative contextual neighbors) and GenSense-all (considers everything).
Semantic Relatedness
We downloaded four semantic relatedness benchmark datasets from the web: MEN (Bruni et al., 2014), MTurk (Radinsky et al., 2011), Rare Words (RW) (Luong et al., 2013) and WordSim353 (WS353) (Finkelstein et al., 2001). The MEN dataset comes in two versions of the word pairs, lemma and natural form; we report results on the natural form, but the performances on the two versions are similar. Each dataset provides a list of word pairs together with their corresponding human-rated scores, where a higher score indicates higher semantic similarity. For example, the score of (journey, voyage) is 9.29 and the score of (king, cabbage) is 0.23 in WS353. For measuring the semantic similarity between a word pair (w, w′) in the datasets, we adopt the sense evaluation metrics AvgSim and MaxSim (Reisinger and Mooney, 2010):
\text{AvgSim}(w, w') \;\stackrel{\text{def}}{=}\; \frac{1}{K K'} \sum_{i=1}^{K} \sum_{j=1}^{K'} \cos(s_i(w), s_j(w')) \qquad (5)

\text{MaxSim}(w, w') \;\stackrel{\text{def}}{=}\; \max_{1 \le i \le K,\; 1 \le j \le K'} \cos(s_i(w), s_j(w')) \qquad (6)
where K and K′ denote the number of senses of w and w′, respectively. AvgSim can be seen as a soft metric, as it averages all the similarity scores, whereas MaxSim can be seen as a hard metric, as it only selects the sense pair with the maximum similarity score. To measure the performance of the sense embedding, we compute the Spearman correlation between the human-rated scores and the AvgSim/MaxSim scores. Table 1 shows a summary of the benchmark datasets and their relationship with the ontologies. In Table 1, row 3 shows the number of words that are listed both in the datasets and in the ontology. The word count in Roget is 63,942.
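As a concrete reading of Eqs. (5) and (6), the sketch below computes AvgSim and MaxSim for two words represented as lists of sense vectors; the data structures and names are ours and are used only for illustration. The Spearman correlation against human ratings can then be obtained with scipy.stats.spearmanr.

```python
import numpy as np
from itertools import product

def cos(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def avg_sim(senses_w, senses_w2):
    # Mean cosine over all sense pairs (Eq. 5).
    sims = [cos(si, sj) for si, sj in product(senses_w, senses_w2)]
    return sum(sims) / len(sims)

def max_sim(senses_w, senses_w2):
    # Best-matching sense pair (Eq. 6).
    return max(cos(si, sj) for si, sj in product(senses_w, senses_w2))
```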
Contextual Word Similarity
Although the semantic relatedness datasets are used in many studies, one major disadvantage is that the words in those word pairs have no context. Therefore, we also conduct experiments with Stanford's Contextual Word Similarities (SCWS) dataset (Huang et al., 2012). SCWS consists of 2,003 word pairs together with human-rated scores, where a higher score indicates higher semantic similarity. Different from the semantic relatedness datasets, the words in SCWS come with contexts and part-of-speech tags; that is, the human subjects can know the usage of a word when they rate the similarity. For each word pair, we compute the AvgSimC/MaxSimC scores from the learned sense embedding (Reisinger and Mooney, 2010):
\text{AvgSimC}(w, w') \;\stackrel{\text{def}}{=}\; \frac{1}{K^2} \sum_{i=1}^{K} \sum_{j=1}^{K} d_{c,w,i}\, d_{c',w',j} \cos(s_i(w), s_j(w')) \qquad (7)

\text{MaxSimC}(w, w') \;\stackrel{\text{def}}{=}\; \cos(\hat{s}(w), \hat{s}(w')) \qquad (8)
where d_{c,w,i} ≝ cos(s_i(w), v(c)) is the likelihood of context c (represented by its vector v(c)) belonging to sense i of w, and ŝ(w) ≝ s_{arg max_{1≤i≤K} d_{c,w,i}}(w) is the maximum likelihood sense for w in context c. We use a window size of 5 for the words in the word pairs (i.e., 5 words before w/w′ and 5 words after w/w′). Stop words are removed from the context. To measure the performance, we compute the Spearman correlation between the human-rated scores and the AvgSimC/MaxSimC scores.
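For illustration, a minimal sketch of MaxSimC (Eq. 8) under the simplifying assumption that each context is represented by the average of the word vectors in its window (stop-word removal and other details of the original setup are omitted); all names are placeholders.

```python
import numpy as np

def cos(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def disambiguate(senses, context_vec):
    # Pick the sense vector most similar to the context vector.
    return max(senses, key=lambda s: cos(s, context_vec))

def max_sim_c(senses_w, ctx_w, senses_w2, ctx_w2, word_vecs):
    # ctx_w / ctx_w2: lists of context words around each target word;
    # assumes at least one context word is present in word_vecs.
    c1 = np.mean([word_vecs[t] for t in ctx_w if t in word_vecs], axis=0)
    c2 = np.mean([word_vecs[t] for t in ctx_w2 if t in word_vecs], axis=0)
    return cos(disambiguate(senses_w, c1), disambiguate(senses_w2, c2))
```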
Semantic Difference
This task is defined to answer whether a word has a closer semantic feature to a concept than another word (Krebs and Paperno, 2016). In this dataset, there are 528 concepts, 24,963 word pairs, and 128,515 items. Each word pair comes with a feature. For example, in a test (w_1, w_2): f, we choose the first word if and only if cos(w_1, f) > cos(w_2, f); otherwise, we choose the second word. As this dataset does not provide context for disambiguation, we use strategies similar to those of the semantic relatedness task:
\text{AvgSimD}(w, w') \;\stackrel{\text{def}}{=}\; \frac{1}{K K'} \sum_{i=1}^{K} \sum_{j=1}^{K'} \cos(s_i(w), s_j(w')) \qquad (9)

\text{MaxSimD}(w, w') \;\stackrel{\text{def}}{=}\; \max_{1 \le i \le K,\; 1 \le j \le K'} \cos(s_i(w), s_j(w')) \qquad (10)
In AvgSimD, we choose the first word iff AvgSimD(w_1, f) > AvgSimD(w_2, f). In MaxSimD, we choose the first word iff MaxSimD(w_1, f) > MaxSimD(w_2, f). The performance is measured by accuracy. Table 2 shows the Spearman correlation (ρ × 100) of AvgSim and MaxSim between human scores and the sense embeddings' scores on each benchmark dataset. Row 2 shows the performance of the vanilla GloVe word embedding. Note that the MaxSim and AvgSim scores are identical when there is only one sense per word (word embedding). Row 3 shows the performance of the retro model. Table 2. ρ × 100 of (MaxSim / AvgSim) on semantic relatedness benchmark datasets.
Results and Discussion
From Table 2, we find that our proposed model outperforms the comparison models retro and GloVe on all the datasets. Compared with retro, the MaxSim Spearman correlation score grows by at least 6.4 on every dataset; on RW, the Spearman correlation score of GenSense exceeds retro by 18.2. We also observe a significant gain in Spearman correlation from GenSense-syn to GenSense-all. Notably, the model that only adopts synonym and positive contextual information already outperforms retro and GloVe. After adding antonym knowledge from Roget, its performance further improves on all but the RW dataset. This result supports the assumption that the antonyms in Roget are quite informative and useful, and that GenSense can combine information from synonyms and antonyms to boost its performance. Although our model can pull sense vectors away from their opposite senses with the help of antonym and negative contextual information, this shift alone cannot guarantee that the new sense vectors move to a better place when only negative relations are used. As a result, GenSense-ant does not perform as well as GenSense-syn. Table 2 also shows the macro-averaged and micro-averaged results in the rightmost two columns. On both of these additional evaluation metrics, the GenSense model outperforms retro by a large margin, which suggests the robustness of our proposed model compared to retro.
We also conduct an experiment to test how much benefit we can get from the relation strengths. We run GenSense-syn over the Roget ontology with a grid of (α_1, α_2, α_3) parameters, each tuned from 0.0 to 1.0 with a step size of 0.1. Table 3 shows the results under the MaxSim metric and Table 4 shows the results under the AvgSim metric. Note that more than one α_1/α_2/α_3 combination may yield the worst or the best case; in that situation we only report one setting in Table 3 and Table 4 due to space limitations. From Table 3, we find that the default setting achieves relatively good results compared to the best case. Another point worth mentioning is that the worst performance occurs under the .1/1./.1 setting on all datasets except WS353. Similar results can be found with Table 4's AvgSim metric. These results demonstrate the importance of the original word vector and of the synonym sense vectors in the model. Table 4. ρ × 100 of AvgSim on semantic relatedness benchmark datasets. Figure 2 shows the ρ × 100 of MaxSim on the semantic relatedness benchmark datasets as a function of vector dimension. All GloVe pre-trained models are trained on the 6-billion-token corpus, with 50d, 100d, 200d and 300d vectors, and we apply the GenSense-all model to each of them. Figure 2 shows that GenSense-all outperforms GloVe on all the datasets for all the tested dimensions. In GloVe's original paper, the authors showed that GloVe's performance (in terms of accuracy) grows with the dimension in the range from 50d to 300d. In this experiment, we show that both GloVe's and GenSense-all's performance grows with the dimension in the range from 50d to 300d in terms of ρ × 100 of MaxSim. Similar results hold for the AvgSim metric. Table 5 shows selected MEN word pairs and their corresponding GenSense-all, GloVe and retro scores for a case study. For GenSense-all, GloVe and retro, we use the MaxSim scores, then sort and re-scale them to MEN's score distribution. From Table 5, we find that GenSense-all improves over the pre-trained word embedding model (in terms of closeness to MEN's score; smaller is better) in the following situations: (1) both words have few senses (lizard, reptiles), (2) both words have many senses (stripes, train) and (3) one word has many senses and the other has few senses (rail, railway). Sometimes, the retro model increases the closeness to MEN's score. In other words, GenSense-all can handle all these situations well and outperforms retro. Table 5. Selected MEN word pairs and their score differences with the GenSense-all, GloVe and retro models (the smaller the better). Table 6 shows the Spearman correlation (ρ × 100) on Stanford's Contextual Word Similarity dataset. With sense-level information, both GenSense and retro outperform the GloVe word embedding model, and the GenSense model performs slightly better than retro. Again, we find that the retrofitting model cannot benefit from negative relation information alone.
Model         SCWS (MaxSimC / AvgSimC)
GloVe         52.9
retro         54.2 / 55.9
GenSense-syn  54.8 / 56.0
GenSense-ant  52.9 / 52.7
GenSense-all  54.2 / 55.3

Table 6. ρ × 100 of (MaxSimC / AvgSimC) on SCWS dataset. Table 7 shows the results of the semantic difference experiment. From Table 7, we find that GenSense outperforms retro and GloVe. The accuracy of retro declines in this experiment. This finding demonstrates the effectiveness and robustness of our proposed framework. Surprisingly, the antonym relation plays an important role when computing the semantic difference. Table 7. (Accuracy, Precision, Recall) × 100 of (MaxSimD / AvgSimD) on the semantic difference dataset.
Conclusion
In this paper we present GenSense, a generalized framework for learning sense embeddings. The generalization has three parts: (1) we extend the synonym relation to the positive contextual neighbor relation, the antonym relation and the negative contextual neighbor relation; (2) within each relation, we consider the semantic strength; and (3) we use relation strengths between relations to balance the different components. We then conduct three types of experiments: semantic relatedness, contextual word similarity, and semantic difference, and show that the GenSense model outperforms the previous approaches. In the future, one possible direction is to apply the generalized sense representations learnt by the proposed method in downstream natural language processing applications to conduct extrinsic evaluations. We release the source code and the pre-trained model as resources for the research community; other versions of the sense-retrofitted embeddings can be found on the website.
Figure 1. An illustration of the relation network. Different textures of the nodes represent different roles (e.g., synonym, antonym, etc.) in the GenSense model.
Figure 2. ρ × 100 of MaxSim on semantic relatedness benchmark datasets as a function of vector dimension, compared against the GloVe model.
Acknowledgements. This research was partially supported by Ministry of Science and Technology, Taiwan, under grants MOST-107-2634-F-002-019, MOST-107-2634-F-002-011 and MOST-106-2923-E-002-012-MY3.
A neuro-evolutionary corpus-based method for word sense disambiguation. Antonia Azzini, Célia Da, Costa Pereira, Mauro Dragoni, Andrea Gb Tettamanzi, IEEE Intelligent Systems. 276Antonia Azzini, Célia da Costa Pereira, Mauro Dragoni, and Andrea GB Tettamanzi. 2012. A neuro-evolutionary corpus-based method for word sense disambiguation. IEEE Intelligent Systems, 27(6):26-35.
Label Propagation and Quadratic Criterion. Yoshua Bengio, Olivier Delalleau, Nicolas Le Roux, Semi-Supervised Learning. Olivier Chapelle, Bernhard Schölkopf, and Alexander ZienMIT PressYoshua Bengio, Olivier Delalleau, and Nicolas Le Roux. 2006. Label Propagation and Quadratic Criterion. In Olivier Chapelle, Bernhard Schölkopf, and Alexander Zien, editors, Semi-Supervised Learning, pages 193-216. MIT Press.
Knowledge-powered deep learning for word embedding. Jiang Bian, Bin Gao, Tie-Yan Liu, Joint European Conference on Machine Learning and Knowledge Discovery in Databases. SpringerJiang Bian, Bin Gao, and Tie-Yan Liu. 2014. Knowledge-powered deep learning for word embedding. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, pages 132-148. Springer.
Multimodal distributional semantics. Elia Bruni, Nam-Khanh Tran, Marco Baroni, The Journal of Artificial Intelligence Research. 49Elia Bruni, Nam-Khanh Tran, and Marco Baroni. 2014. Multimodal distributional semantics. The Journal of Artificial Intelligence Research, 49:1-47.
Extracting semantic representations from word co-occurrence statistics: A computational study. A John, Joseph P Bullinaria, Levy, Behavior research methods. 393John A Bullinaria and Joseph P Levy. 2007. Extracting semantic representations from word co-occurrence statistics: A computational study. Behavior research methods, 39(3):510-526.
Indexing by latent semantic analysis. C Scott, Susan T Deerwester, Richard A Dumais, Harshman, Journal of the American society for information science. 416391Scott C. Deerwester, Susan T Dumais, and Richard A. Harshman. 1990. Indexing by latent semantic analysis. Journal of the American society for information science, 41(6):391.
A neural word embeddings approach for multi-domain sentiment analysis. Mauro Dragoni, Giulio Petrucci, IEEE Transactions on Affective Computing. 84Mauro Dragoni and Giulio Petrucci. 2017. A neural word embeddings approach for multi-domain sentiment analysis. IEEE Transactions on Affective Computing, 8(4):457-470.
Retrofitting sense-specific word vectors using parallel text. Allyson Ettinger, Philip Resnik, Marine Carpuat, Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesAllyson Ettinger, Philip Resnik, and Marine Carpuat. 2016. Retrofitting sense-specific word vectors using parallel text. In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1378-1383.
Retrofitting word vectors to semantic lexicons. Manaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, Noah A Smith, Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesDenver, ColoradoManaal Faruqui, Jesse Dodge, Sujay Kumar Jauhar, Chris Dyer, Eduard Hovy, and Noah A Smith. 2015. Retrofitting word vectors to semantic lexicons. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 1606-1615, Denver, Colorado.
. Christiane Fellbaum, Wiley Online LibraryChristiane Fellbaum. 1998. WordNet. Wiley Online Library.
Placing search in context: The concept revisited. Lev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, Eytan Ruppin, Proceedings of the 10th international conference on World Wide Web. the 10th international conference on World Wide WebLev Finkelstein, Evgeniy Gabrilovich, Yossi Matias, Ehud Rivlin, Zach Solan, Gadi Wolfman, and Eytan Ruppin. 2001. Placing search in context: The concept revisited. In Proceedings of the 10th international conference on World Wide Web, pages 406-414.
Improving word representations via global context and multiple word prototypes. Eric H Huang, Richard Socher, Christopher D Manning, Andrew Y Ng, Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics. the 50th Annual Meeting of the Association for Computational LinguisticsJeju, Republic of KoreaEric H. Huang, Richard Socher, Christopher D. Manning, and Andrew Y. Ng. 2012. Improving word representations via global context and multiple word prototypes. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics, pages 873-882, Jeju, Republic of Korea.
SensEmbed: learning sense embeddings for word and relational similarity. Ignacio Iacobacci, Mohammad Taher Pilehvar, Roberto Navigli, Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. Beijing, ChinaLong Papers1Proceedings of the 53rd Annual Meeting of the AssociationIgnacio Iacobacci, Mohammad Taher Pilehvar, and Roberto Navigli. 2015. SensEmbed: learning sense embeddings for word and relational similarity. In Proceedings of the 53rd Annual Meeting of the Association 1 http://nlg.csie.ntu.edu.tw/nlpresource/GenSense 2 https://github.com/y95847frank/GenSense for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 95-105, Beijing, China.
Ontologically grounded multi-sense representation learning for semantic vector space models. Chris Sujay Kumar Jauhar, Eduard Dyer, Hovy, Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesSujay Kumar Jauhar, Chris Dyer, and Eduard Hovy. 2015. Ontologically grounded multi-sense representation learning for semantic vector space models. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 683-693.
Roget's 21st Century Thesaurus in Dictionary Form: The Essential Reference for Home, School, or Office. B A , Dell PubKipfer and Princeton Language InstituteB.A. Kipfer and Princeton Language Institute. 1993. Roget's 21st Century Thesaurus in Dictionary Form: The Essential Reference for Home, School, or Office. Dell Pub.
Capturing discriminative attributes in a distributional space: Task proposal. Alicia Krebs, Denis Paperno, Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP. the 1st Workshop on Evaluating Vector-Space Representations for NLPAlicia Krebs and Denis Paperno. 2016. Capturing discriminative attributes in a distributional space: Task proposal. In Proceedings of the 1st Workshop on Evaluating Vector-Space Representations for NLP, pages 51-54.
MUSE: Modularizing unsupervised sense embeddings. Guang-He Lee, Yun-Nung Chen, Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing. the 2017 Conference on Empirical Methods in Natural Language ProcessingCopenhagen, DenmarkGuang-He Lee and Yun-Nung Chen. 2017. MUSE: Modularizing unsupervised sense embeddings. In Proceedings of the 2017 Conference on Empirical Methods in Natural Language Processing, pages 327-337, Copenhagen, Denmark.
Do multi-sense embeddings improve natural language understanding?. Jiwei Li, Dan Jurafsky, Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing. the 2015 Conference on Empirical Methods in Natural Language ProcessingLisbon, PortugalJiwei Li and Dan Jurafsky. 2015. Do multi-sense embeddings improve natural language understanding? In Proceedings of the 2015 Conference on Empirical Methods in Natural Language Processing, pages 1722-1732, Lisbon, Portugal.
Constraining word embeddings by prior knowledgeapplication to medical information retrieval. Xiaojie Liu, Jian-Yun Nie, Alessandro Sordoni, Asia Information Retrieval Symposium. Xiaojie Liu, Jian-Yun Nie, and Alessandro Sordoni. 2016. Constraining word embeddings by prior knowledge- application to medical information retrieval. In Asia Information Retrieval Symposium, pages 155-167.
Better word representations with recursive neural networks for morphology. Thang Luong, Richard Socher, Christopher D Manning, Proceedings of the Seventeenth Conference on Computational Natural Language Learning. the Seventeenth Conference on Computational Natural Language LearningThang Luong, Richard Socher, and Christopher D Manning. 2013. Better word representations with recursive neural networks for morphology. In Proceedings of the Seventeenth Conference on Computational Natural Language Learning, pages 104-113.
Efficient estimation of word representations in vector space. Tomas Mikolov, Kai Chen, Greg Corrado, Jeffrey Dean, arXiv:1301.3781arXiv preprintTomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
PPDB 2.0: Better paraphrase ranking, fine-grained entailment relations, word embeddings, and style classification. Ellie Pavlick, Pushpendre Rastogi, Juri Ganitkevich, Chris Callison-Burch Ben Van Durme, Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing. the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language ProcessingBeijing, ChinaShort PapersEllie Pavlick, Pushpendre Rastogi, Juri Ganitkevich, and Chris Callison-Burch Ben Van Durme. 2015. PPDB 2.0: Better paraphrase ranking, fine-grained entailment relations, word embeddings, and style classification. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Short Papers), pages 425-430, Beijing, China.
Glove: Global vectors for word representation. Jeffrey Pennington, Richard Socher, Christopher D Manning, Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing. the 2014 Conference on Empirical Methods in Natural Language ProcessingDoha, QatarJeffrey Pennington, Richard Socher, and Christopher D Manning. 2014. Glove: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing, pages 1532-1543, Doha, Qatar.
Context-dependent sense embedding. Lin Qiu, Kewei Tu, Yong Yu, Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing. the 2016 Conference on Empirical Methods in Natural Language ProcessingLin Qiu, Kewei Tu, and Yong Yu. 2016. Context-dependent sense embedding. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, pages 183-191.
A word at a time: computing word relatedness using temporal semantic analysis. Kira Radinsky, Eugene Agichtein, Evgeniy Gabrilovich, Shaul Markovitch, Proceedings of the 20th international conference on World wide web. the 20th international conference on World wide webKira Radinsky, Eugene Agichtein, Evgeniy Gabrilovich, and Shaul Markovitch. 2011. A word at a time: computing word relatedness using temporal semantic analysis. In Proceedings of the 20th international conference on World wide web, pages 337-346.
Multi-prototype vector-space models of word meaning. Joseph Reisinger, J Raymond, Mooney, Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics. Joseph Reisinger and Raymond J Mooney. 2010. Multi-prototype vector-space models of word meaning. In Human Language Technologies: The 2010 Annual Conference of the North American Chapter of the Association for Computational Linguistics, pages 109-117.
Improving lexical embeddings with semantic knowledge. Mo Yu, Mark Dredze, Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics. the 52nd Annual Meeting of the Association for Computational Linguistics2Mo Yu and Mark Dredze. 2014. Improving lexical embeddings with semantic knowledge. In Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), volume 2, pages 545-550. |
258,765,279 | Probing structural constraints of negation in Pretrained Language Models | Contradictory results about the encoding of the semantic impact of negation in pretrained language models (PLMs) have been drawn recently (e.g. Kassner and Schütze (2020); Gubelmann and Handschuh(2022)). In this paper we focus rather on the way PLMs encode negation and its formal impact, through the phenomenon of the Negative Polarity Item (NPI) licensing in English. More precisely, we use probes to identify which contextual representations best encode 1) the presence of negation in a sentence, and 2) the polarity of a neighboring masked polarity item. We find that contextual representations of tokens inside the negation scope do allow for (i) a better prediction of the presence of not compared to those outside the scope and (ii) a better prediction of the right polarity of a masked polarity item licensed by not, although the magnitude of the difference varies from PLM to PLM. Importantly, in both cases the trend holds even when controlling for distance to not. This tends to indicate that the embeddings of these models do reflect the notion of negation scope, and do encode the impact of negation on NPI licensing. Yet, further control experiments reveal that the presence of other lexical items is also better captured when using the contextual representation of a token within the same syntactic clause than outside from it, suggesting that PLMs simply capture the more general notion of syntactic clause. | [
52967399,
248780296,
226283916,
248779917,
218628691,
3882934,
21735129,
218502143
] | Probing structural constraints of negation in Pretrained Language Models
Association for Computational Linguistics, May 22-24, 2023. © 2023 Association for Computational Linguistics
David Kletz david.kletz@sorbonne-nouvelle.fr
Université Paris Cité & LLF (CNRS/UPC
Université Sorbonne Nouvelle & Lattice
CNRS/ENS-PSL
USN
Marie Candito marie.candito@u-paris.fr
Université Paris Cité & LLF (CNRS/UPC
Pascal Amsili pascal.amsili@ens.fr
Université Sorbonne Nouvelle & Lattice
CNRS/ENS-PSL
USN
Probing structural constraints of negation in Pretrained Language Models
Proceedings of the 24th Nordic Conference on Computational Linguistics (NoDaLiDa)
the 24th Nordic Conference on Computational Linguistics (NoDaLiDa), Association for Computational Linguistics, May 22-24, 2023. © 2023
Contradictory results about the encoding of the semantic impact of negation in pretrained language models (PLMs) have been drawn recently (e.g. Kassner and Schütze (2020); Gubelmann and Handschuh(2022)). In this paper we focus rather on the way PLMs encode negation and its formal impact, through the phenomenon of the Negative Polarity Item (NPI) licensing in English. More precisely, we use probes to identify which contextual representations best encode 1) the presence of negation in a sentence, and 2) the polarity of a neighboring masked polarity item. We find that contextual representations of tokens inside the negation scope do allow for (i) a better prediction of the presence of not compared to those outside the scope and (ii) a better prediction of the right polarity of a masked polarity item licensed by not, although the magnitude of the difference varies from PLM to PLM. Importantly, in both cases the trend holds even when controlling for distance to not. This tends to indicate that the embeddings of these models do reflect the notion of negation scope, and do encode the impact of negation on NPI licensing. Yet, further control experiments reveal that the presence of other lexical items is also better captured when using the contextual representation of a token within the same syntactic clause than outside from it, suggesting that PLMs simply capture the more general notion of syntactic clause.
Introduction
Negation has recently been the focus of various works aiming at determining the abilities of Pre-trained Language Models (PLMs) to capture linguistic knowledge.
Some works investigate the 'semantic impact' of negation, namely its impact in terms of truth values, by interpreting how the presence of negation impacts the probability distribution at a masked position. The rationale is that negating a verb reverses the truth value of its clause, which should be reflected in the probability distribution at certain positions. Ettinger (2020); Kassner and Schütze (2020) use factual statements such as (1), and report that models output similar distributions for the positive and negative variants of (1), and conclude that models largely ignore negation.
(1) A robin is (not) a [MASK]
Gubelmann and Handschuh (2022) chose to avoid factual statements and to focus rather on multi-sentence self-contained examples, such that, given the context provided by the first sentence, one particular word is either likely (in positive items) or ruled out (in negative items) at a masked position in the second sentence. Because this particular word is substantially less often the top-1 prediction in the negative items than in the positive items, the authors draw the opposite conclusion that PLMs do show sensitivity to negation.
A different line of works focused on finding out to what extent negation is encoded in PLM embeddings. Celikkanat et al. (2020) train classifiers taking as input the contextual embedding of a verb or its subject or direct object, and predicting whether the verb is negated or not. The resulting high accuracy allows them to conclude that these tokens' embeddings do contain "traces" of not. More generally, several authors have investigated whether the contextual representation of a token encodes information about surrounding tokens. To ease further reading, we will talk of a classifier taking as input an input embedding, i.e. the contextual representation of an input token, and predicting some target information about another token in the sentence. For instance, Klafka and Ettinger (2020) study how input embeddings encode animacy, gender, and number of surrounding words in a specific SVO context. Li et al. (2022) target the number feature of French participles in the context of object-past participle agreement. They show that the performance of the classifier depends on the syntactic position of the input token in the sentence. We will build on their idea to compare performance at predicting target information depending on the syntactic zone the input token belongs to. In this paper, one of the probed target information will be the presence or absence of a given word within the sentence, which we call the target token.
More precisely, our aim is to study PLMs' ability to capture and encode structural information concerning negation (namely negation scope). To do so we first probe whether input embeddings can serve to accurately predict the presence or absence of a target not. 1 Moreover, we wish to test PLMs' ability to actually mobilize this encoding to capture phenomena that are direct consequences of the presence of negation. To do so, we focus on the licensing of Negative Polarity Items (NPI) by not modifying a verb. Polarity Items (PI), either positive (e.g. some), or negative (e.g. any), are words or expressions that are constrained in their distribution (Homer, 2020). A NPI will require that a word or a construction, called the licensor, be in the vicinity. More precisely, the licensor itself grammatically defines a zone of the sentence, called the licensing scope, in which the NPI can appear. The adverb not modifying a verb is one such licensor. While any is licensed by negation in (2-a) vs. (2-b), it is not licensed in (2-c), even though the verb is negated, arguably because it is not in the licensing scope 2 .
(2) a. Sam didn't find any books. b. *Sam found any books. c. *Any book was not found by Sam.
Jumelet and Hupkes (2018) have shown that LSTM embeddings do encode the notion of licensing scope (given an input embedding, a classifier can predict the structural zone the input token belongs to), a finding later confirmed for transformer-based PLMs (Warstadt et al., 2019).
1 We restrict our probing to not, which is by far the most frequent negation clue (57% of the occurrences, while the second most frequent, no, accounts for 21% of occurrences). 2 We leave aside the uses of any and the like having free choice interpretations, as for instance in "Pick any card".
Note that our methodology differs from that of Jumelet and Hupkes (2018), who, given an input token, predict the zone this token belongs to. We instead predict the polarity of a neighboring masked polarity item and then compare accuracies depending on the input token's zone. Our motivation is that polarity, being lexical information, requires less linguistic preconception, and hence our probing method is a more direct translation of the NPI licensing phenomenon: we study whether and where the information of "which PIs are licit where?" is encoded, in the context of sentence negation. This method also allows us to better control the confounding factor of distance between the input embedding and the licensor not.
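To illustrate the kind of probe we use, the sketch below trains a linear classifier on frozen contextual embeddings to predict a binary target (e.g. the polarity of a masked polarity item), so that accuracies can then be compared across the zones the input tokens belong to. The model name, layer choice and data format are placeholders, not necessarily the exact setup used here.

```python
import numpy as np
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.linear_model import LogisticRegression

MODEL = "bert-base-uncased"  # placeholder PLM
tok = AutoTokenizer.from_pretrained(MODEL)
plm = AutoModel.from_pretrained(MODEL)

def embed(sentence, token_index):
    # Contextual embedding (last layer) of the word piece at token_index;
    # token_index is in word-piece space, including the [CLS] token.
    enc = tok(sentence, return_tensors="pt")
    with torch.no_grad():
        out = plm(**enc).last_hidden_state[0]
    return out[token_index].numpy()

def train_probe(examples):
    # examples: list of (sentence, input_token_index, label); label in {0, 1}
    X = np.stack([embed(s, i) for s, i, _ in examples])
    y = np.array([lab for _, _, lab in examples])
    return LogisticRegression(max_iter=1000).fit(X, y)
```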
In the following, we define the linguistic notions of negation scope and NPI licensing scope in section 2, and show how we actually identified them in English sentences. In section 3, we describe our probing experiments and discuss their results, both for the encoding of not (section 3.1), and the encoding of NPI licensing (section 3.2). We then study the more general ability of PLMs to deal with clause boundaries (section 4), and conclude in section 5.
2 Defining and identifying scopes

2.1 Negation scope

From a linguistic point of view, the scope of a negation cue is the area of the sentence whose propositional content's truth value is reversed by the presence of the cue. While in many cases it is sufficient to use the syntactic structure to recover the scope, in some cases semantics or even pragmatics come into play. 3 Nevertheless, annotation guidelines usually offer syntactic approximations of negation scope.
To identify the negation scope for not 4 modifying a verb, we followed the syntactic constraints that can be inferred from the guidelines of Morante and Blanco (2012). Note though that these guidelines restrict the annotation to factual eventualities, leaving aside e.g. negated future verbs. We did not retain such a restriction, hence our identification of the negation scope is independent from verb tense or modality.
NPI licensing scope
Polarity items are a notoriously complex phenomenon. To identify the NPI licensing scope, we focus on specific syntactic patterns defined by Jumelet and Hupkes (2018), retaining only those involving not as licensor. 5 Table 1 shows an example for each retained pattern (hereafter the negpatterns), with the NPI licensing scope in blue.
Importantly, in the neg-patterns, the licensing scope is strictly included in the negation scope: within the clause of the negated verb, the tokens to its left belong to the negation scope but not to the licensing scope. E.g. in (3), anyone is not licit as a subject of going, whether the location argument is itself a plain PP, a NPI or a PPI (3-b).
(3) a. I'm not going anywhere.
    b. *Anyone is not going to the party/somewhere/anywhere.
We thus defined 4 zones for the not+NPI sentences, exemplified in Table 1: PRE (tokens before both scopes), PRE-IN (to the left of the licensing scope, but within the negation scope), IN (in both scopes), and POST (after both scopes). We note though that the restriction exemplified in (3-b) only holds for non-embedded NPIs (de Swart, 1998), so examples like (4), with an embedded NPI in the subject of the negated verb (hence belonging to our PRE-IN zone), are theoretically possible.
(4) Examples with any relevance to that issue didn't come up in the discussion.
Yet in practice, we found that they are extremely rare: using the Corpus of Contemporary American English (COCA, Davies 2015) 6 , we extracted sentences matching one of the neg-patterns, and among these, sentences having any or anybody/one/thing/time/where in the IN zone, the PRE-IN zone or both. As shown in Table 2, any* in the PRE-IN zone is far rarer than in the classical licensing scope (IN zone) 7 . Hence we stuck to the usual notion of direct NPI licensing scope, as illustrated in Table 1.
Building the not+NPI test set
Having defined these structural zones, we could use them to probe the traces they carry and compare the magnitude of these traces across the four zones. To do so, we built a test set of COCA sentences containing not licensing a NPI (hereafter the not+NPI test set), matching one of the neg-patterns of Table 1, and having at least one any, anybody, anyone, anything, anytime or anywhere within the licensing scope.
The negation scope was identified through an approximation based on dependency parses (from the Stanza parser (Qi et al., 2020)), which proved more convenient than phrase-structure parses: we took the subtree of the negated verb, excluding not itself, and excluding dependents corresponding to sentential or verbal conjuncts and to sentential parentheticals.
More precisely, we identified the token having not as dependent (which, given our patterns, can be either the negated verb or a predicative adjective in case of a negated copula). Then, we retrieved the children of this head, except those attached to it with a "conj", "parataxis", "mark" or "discourse" dependency. In the complete subtrees of the selected dependents, all tokens were annotated as being inside the negation scope.

6 We used a version with texts from 1990 to 2012. COCA is distributed with some tokens in some sentences voluntarily masked, varying across distributions. We ignored such sentences.
7 More precisely, the figures in Table 2 correspond to an upper bound, because of (i) potential syntactic parsing errors impacting the identification of the zones, (ii) cases in which the NPI licensor is different from the not targeted by the patterns, and (iii) cases in which any* is a free choice item rather than a NPI. We inspected 250 examples of any* in the PRE-IN zone, and 250 examples in the IN zone. In the former, we found that almost all cases fall under (i), (ii) or (iii), less than 3% corresponding to examples such as (4). In contrast, in the IN zone the proportion of NPIs actually licensed by the target not is 92%.
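The scope approximation just described can be illustrated with a short sketch (our own illustration rather than the original implementation), assuming Stanza's default English pipeline and the dependency labels listed above:

```python
import stanza

# Dependents whose subtrees are excluded from the scope, following the description above.
EXCLUDED_DEPRELS = {"conj", "parataxis", "mark", "discourse"}

nlp = stanza.Pipeline("en", processors="tokenize,pos,lemma,depparse")

def negation_scope(sentence):
    """Approximate the negation scope of a verb-modifying 'not'/'n't' as the subtree of
    its head, excluding 'not' itself and the excluded direct dependents above."""
    words = sentence.words
    by_head = {}
    for w in words:
        by_head.setdefault(w.head, []).append(w)
    neg = next((w for w in words if w.text.lower() in ("not", "n't")), None)
    if neg is None:
        return set()
    scope = {neg.head}                      # the negated verb (or predicative adjective)
    stack = [neg.head]
    while stack:
        for child in by_head.get(stack.pop(), []):
            if child.id == neg.id or (child.head == neg.head and child.deprel in EXCLUDED_DEPRELS):
                continue
            scope.add(child.id)
            stack.append(child.id)
    return scope

sent = nlp("Sam did not find any books in the library.").sentences[0]
ids = negation_scope(sent)
print([w.text for w in sent.words if w.id in ids])
```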
Id | Pattern | Example and zones
1/2 | (VP (VB*/MD) (RB not) VP) | I have my taxi and I 'm not going anywhere but my brother will leave Spain because he has a degree.
3 | (VP (VB*) (RB not) NP/PP/ADJP) | Since it is kind of this fairy-tale land, there aren't any rules of logic so you can do anything, she says.
5* | (S (RB not) VP) | I went in early, not wanting anyone to see me and hoping for no line at the counter.

Table 1: The "neg-patterns": patterns adapted from Jumelet and Hupkes (2018), which we used to identify some cases of not licensing a NPI and to build the not+NPI test set. Col1: pattern id in Jumelet and Hupkes (2018). Col2: syntactic pattern (defined as a phrase-structure subtree, using the Penn Treebank's annotation scheme), with the licensing scope appearing in blue. Col3: examples with colors for the four zones: pink for tokens in the PRE zone (before both scopes), purple for PRE-IN (to the left of the licensing scope, but within the negation scope), blue for IN (within both scopes) and green for POST (after both scopes). The NPI licensor is not, and appears in yellow.

For the licensing scope, we parsed the corpus using the PTB-style parser "Supar Parser" 8 of Zhang et al. (2020), and further retained only the sentences (i) matching at least one of the neg-patterns of Table 1 and (ii) having a NPI within the licensing scope (IN zone, shown in blue in Table 1), resulting in the not+NPI test set, whose statistics are provided in Table 3.
Probing for the scopes
Our objective is to study how a transformer-based PLM (i) encodes the presence of a negation (the "traces" of negation) and (ii) models lexico-syntactic constraints imposed by negation, such as the modeling of a NPI licensing scope. Using the terminology introduced in section 1, we probe whether input embeddings encode as target information (i) the presence of not elsewhere in the sentence, and (ii) the polarity of a masked PI. The former focuses on a plain encoding of negation, whereas the latter focuses on whether the encoding of negation can be mobilized to reflect a property (NPI licensing) that is directly imposed by negation. To investigate whether such an encoding matches linguistic notions of scopes, we contrast results depending on the zone the input token belongs to (among the four zones defined for not licensing a NPI, namely PRE, PRE-IN, IN, POST) and its distance to not.

8 https://parser.yzhang.site/en/latest/index.html
We studied four PLMs: BERT-base-cased and BERT-large-cased (Devlin et al., 2019), and ROBERTA-base and ROBERTA-large (Liu et al., 2019). All our experiments were done with each of these models, and for a given model, each experiment was repeated three times. All the sentences we used for training, tuning and testing were extracted from the COCA corpus.
Probing for the negation scope
In preliminary experiments, we extended Celikkanat et al. (2020)'s study by investigating the traces of not in the contextual embedding of all the tokens of a sentence containing not (instead of just the verb, subject and object).
Training neg-classifiers
We trained binary classifiers (hereafter the m-neg-classifiers, with m the name of the studied PLM) taking an input contextual embedding, and predicting the presence or absence of at least one not in the sentence. In all our experiments, the PLMs' parameters were frozen. We trained 3 classifiers for each of the 4 tested PLMs. To train and evaluate these classifiers, we randomly extracted 40,000 sentences containing exactly one not, and 40,000 sentences not containing any not. These sentences were BERT- and ROBERTA-tokenized, and for each model, we randomly selected one token in each of these sentences to serve as input token. Among these input tokens, we ignored any token not, as well as all PLM tokens associated to a contracted negation: for instance don't is BERT-tokenized into don + ' + t, and ROBERTA-tokenized into don' + t. These tokens were ignored since they are too obvious a clue for the presence of a verbal negation. Furthermore, in order to homogenize the handling of negation whether contracted or not, we also set aside any modal or auxiliary that can form a negated contracted form. Hence, in She did leave, She did not leave or She didn't leave, the only candidate input tokens are those for She and leave. 9 We used 64k sentences for training (neg-train-sets), and the remaining 16k for testing (neg-test-set).
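As an illustration of this probing setup (a schematic sketch under simplifying assumptions, not the authors' code), the input embeddings can be extracted from a frozen PLM and fed to a small classifier as follows; the list `sentences` of (text, contains-not) pairs and the simplified token filter are assumptions made for the example:

```python
import random
import torch
from transformers import AutoTokenizer, AutoModel
from sklearn.neural_network import MLPClassifier

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModel.from_pretrained("bert-base-cased").eval()   # parameters stay frozen

# Simplified stand-in for the filter described above (negation pieces, auxiliaries, modals).
FORBIDDEN = {"not", "n't", "'", "t", "do", "does", "did", "is", "are", "was", "were",
             "have", "has", "had", "can", "could", "will", "would", "shall", "should",
             "may", "might", "must"}

def random_input_embedding(text):
    """Contextual embedding of one randomly chosen token that is not a forbidden clue."""
    enc = tokenizer(text, return_tensors="pt", truncation=True)
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]            # (seq_len, dim)
    tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0])
    candidates = [i for i, tok in enumerate(tokens)
                  if tok not in tokenizer.all_special_tokens
                  and tok.lower().lstrip("#") not in FORBIDDEN]
    return hidden[random.choice(candidates)].numpy()

# `sentences` is assumed to hold the 80k (text, contains_not) pairs described above.
X = [random_input_embedding(text) for text, _ in sentences]
y = [label for _, label in sentences]
clf = MLPClassifier(hidden_layer_sizes=(450, 450), learning_rate_init=0.001, max_iter=50)
clf.fit(X[:64000], y[:64000])
print("neg-test-set accuracy:", clf.score(X[64000:], y[64000:]))
```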
We provide the obtained accuracies on this neg-test-set in Table 4, which shows that performance is largely above chance. We provide a more detailed analysis of the classifiers' performance in section 3.2.
Model | BERT_b | BERT_l | ROB._b | ROB._l
Accuracy | 74.3 | 73.1 | 72.1 | 76.6

Table 4: Accuracies of the neg-classifiers on the neg-test-set for each PLM (averaged over 3 runs).
Studying results on the not+NPI test set
To probe the negation scope, we then used the not+NPI test set (cf. section 2), and compare accuracies in PRE-IN vs. PRE, and in IN vs. POST. Note though that distance to not is also likely to impact the classifiers' accuracy. Indeed, by definition the structural zones obviously correlate with distance to not. For instance, a token at distance 3 to the right of not is more likely to be in the licensing scope than a token at distance 20. Hence, to study the impact of the input token's zone, we needed to control for distance to the negation clue.
We thus broke down our classifiers' accuracy on the not+NPI test set, not only according to the input token's zone, but also according to its relative position to the negation cue. Table 5 shows an example of not+NPI sentence, and the zone and relative position to not of each token. The target not has position 0, and so do all the PLMs' subword tokens involved in the negation complex, and all preceding modal or auxiliary, to homogenize across PLMs and across contracted/plain negation. By construction, the PRE and PRE-IN zones correspond to negative positions, whereas IN and POST correspond to positive ones.
The break-down by position for ROBERTA-large is shown in Figure 1 (results for other models are in appendix Figure 4). Two effects can be observed for all 4 PLMs: firstly, there is a general decrease of the accuracy as we move away from not, for the four zones. This contrasts with the findings of Klafka and Ettinger (2020), who did not observe a distance effect in their experiments, when probing whether the contextual representation of e.g. a direct object encodes e.g. the animacy of the subject. The decrease is more rapid before not than after it, which remains to be explained. It might come from the negation scope being shorter before not than after it.
Secondly, when looking at fixed relative distances, there is a slight but consistent effect at almost all positions that the accuracy is higher when the input token is in the negation scope (either PRE-IN or IN), than when it is outside (PRE and POST) (the differences are statistically significant at p < 0.001, cf. Appendix B). This tendency is more marked for the PRE vs. PRE-IN distinction than for the POST vs. IN distinction.
This observation can be summarized by computing the average accuracy gap, namely the accuracy differences averaged across positions (the average of the purple minus pink bars, and of blue minus green bars in Figure 3), which provide an average difference when a token is within or outside the negation scope. The average accuracy gaps for the four tested models are given in Table 6. It confirms that input embeddings of tokens inside the negation scope do allow for a slightly better prediction of the presence of not than those outside the scope. Note that the average difference is stable across models, whose size does not seem to matter. It shows that the strength of the encoding of not in contextual representations matches the linguistic notion of negation scope.

Table 5: Example sentence from the not+NPI test set: structural zones and relative positions to not. Any auxiliary or modal preceding the target not has position 0 too, to homogenize contracted and plain negation, and BERT versus ROBERTA's tokenization.

BERT_b | BERT_l | ROB_b | ROB_l
3.0 (0.6) | 3.5 (0.2) | 2.6 (0.2) | 2.6 (1.3)

Table 6: Accuracy gaps for the neg-classifiers on the not+NPI test set, for each tested PLM, averaged over 14 relative positions and 3 runs (stdev within brackets).
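For concreteness, the average accuracy gap can be computed from per-example predictions along the following lines (a sketch assuming a pandas DataFrame with hypothetical column names):

```python
import pandas as pd

def average_accuracy_gap(df):
    """df has one row per test example, with columns 'position' (relative position to not),
    'zone' (PRE / PRE-IN / IN / POST) and 'correct' (1 if the classifier was right)."""
    acc = df.groupby(["position", "zone"])["correct"].mean().unstack()
    gap_left = (acc["PRE-IN"] - acc["PRE"]).dropna()    # positions -8 .. -1
    gap_right = (acc["IN"] - acc["POST"]).dropna()      # positions 3 .. 8
    return 100 * pd.concat([gap_left, gap_right]).mean()
```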
We also observed that the biggest difference is at position -1, which mostly corresponds to a contrast between a finite vs. non-finite negated verb (neg-patterns 1/2/3 vs. neg-pattern 5 in Table 1), which seems well reflected in PLMs' embeddings.
Probing for the licensing scope
We then focused on whether this encoding of not can actually be mobilized to capture the licensing of a NPI. We built classifiers (hereafter the m-pol-classifiers 10 , m referring to the PLM), taking an input contextual embedding, and predicting as target information the polarity of a masked position, originally filled with a positive or negative PI. Importantly, the input embedding in the training set is randomly chosen in the sentence, and can correspond to a position with no a priori linguistic knowledge about the polarity of the PI (Figure 2).
We train on sentences originally having either a PPI or a NPI, which we mask before running each studied PLM. More precisely, in each COCA subcorpus (each genre), and for each of the 6 NPI/PPI pairs listed by Jumelet and Hupkes (2018) 11 , we randomly took at most 2,000 sentences containing the NPI, and the same amount of sentences containing the corresponding PPI 12 . In each of these, we masked the PI, randomly selected one token per sentence to serve as input token (excluding the masked position) and split these into 63,529 examples for training (pol-train-set) and 15,883 for testing (pol-test-set).
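A minimal sketch of how such training examples can be constructed from raw sentences is given below (a hypothetical helper assuming whitespace-tokenized input and a generic [MASK] placeholder; the NPI/PPI pairs follow the list above):

```python
import random

# NPI/PPI pairs: (any/some)(∅/body/one/thing/time/where)
PAIRS = {npi: "negative" for npi in
         ("any", "anybody", "anyone", "anything", "anytime", "anywhere")}
PAIRS.update({ppi: "positive" for ppi in
              ("some", "somebody", "someone", "something", "sometime", "somewhere")})

def make_example(sentence, mask_token="[MASK]"):
    """Mask the polarity item, record its polarity, and pick a random other token as input."""
    tokens = sentence.split()
    for i, tok in enumerate(tokens):
        polarity = PAIRS.get(tok.lower().strip(".,!?"))
        if polarity is not None:
            masked = tokens[:i] + [mask_token] + tokens[i + 1:]
            input_position = random.choice([j for j in range(len(tokens)) if j != i])
            return " ".join(masked), input_position, polarity
    return None

print(make_example("I 'm not going anywhere tonight ."))
```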
Model | BERT_b | BERT_l | ROB._b | ROB._l
Accuracy | 64.2 | 63.7 | 56.6 | 68.6

Table 7: Accuracies of the pol-classifiers on the pol-test-set for each PLM (averaged over 3 runs).
Accuracies on the pol-test-set for each PLM are shown in Table 7. While still above chance, we observe that accuracy does not exceed 69%, which is considerably lower than the accuracies of the neg-classifiers (Table 4). This is not surprising since the task is more difficult. First, as stressed above, some of the training input tokens are independent, from the linguistic point of view, of the PI's polarity. Second, the cues for predicting the polarity are diverse. And third, in numerous contexts, both polarities are indeed possible, even though not equally likely. We did not control the training for this, on purpose, so as not to introduce any additional bias in the data. We can thus interpret the pol-classifiers' scores as how likely a given polarity is.
Next, we applied these classifiers on the not+NPI test set. The objective is to compare the classifiers' accuracy depending on the structural zone the input token belongs to. If PLMs have a notion of licensing scope, then the polarity prediction should be higher when using an input token from the IN zone.
Results
Once more, we controlled for distance of the input embedding to not. The break-down by position and structural zone for ROBERTA-large is provided in Figure 3 (results for other models are in appendix figures 5).
Again, we observe a general accuracy decrease as moving away from not, even faster than for the previous experiment. The decrease is more rapid in the PRE-IN zone than in the IN zone (e.g. at distance -4 in PRE-IN, accuracy is less than 70%, whereas it is still above it at distance 8 in the IN zone), which could indicate that the traces of not are more robust in the licensing scope.
Secondly, as for the previous experiment, for each relative position, when the input token is in the negation scope (either PRE-IN or IN), the accuracy is higher than when it is outside (PRE and POST). Even though we cannot exclude that the relatively high overall accuracies may be explained by the classifier catching some regularities of the sentences containing a NPI rather than a PPI (independently of the presence of not), it remains that for the not+NPI sentences, accuracy is higher when the input token is in the negation scope than outside it. Moreover, this trend is much more marked than for the previous experiment.
Thirdly, the amplitude of this observation depends on the model. We provide the accuracy gaps for each PLM in Table 8. We observe that the trend is marked for ROBERTA-large and BERT-base (gaps of 8.7 and 7.4 accuracy points, actually much higher than the accuracy gaps for predicting the presence of not), but lower for ROBERTA-base and BERT-large.
BERT_b | BERT_l | ROB_b | ROB_l
7.4 (0.5) | 3.1 (0.4) | 1.4 (0.2) | 8.7 (0.6)

Table 8: Accuracy gaps for the pol-classifiers on the not+NPI test set, averaged over 14 relative positions and 3 runs (stdev within brackets).
This leads us to conclude that (i) PLMs do encode structural constraints imposed by not (NPI licensing), but to varying degrees across the PLMs we tested, and (ii) that this encoding is stronger in the negation scope than outside it, independently of the distance to not. This only partially matches the linguistic expectation that the strongest zone should be the licensing scope rather than the entire negation scope.
Probing clause boundaries
We have seen that PLMs are able to encode negation scope; however, this notion of scope often simply corresponds to the notion of syntactic clause. So it might be the case that PLMs are mainly sensitive to clause boundaries and that this sensitivity is the unique/main source of PLMs' ability to encode negation scope. In this section we report a number of experiments designed to assess PLMs' ability to encode clause boundaries in general.
We chose to use the same setting as the one we used with the neg-classifiers (section 3.1.1). Instead of using not as a target token, we chose various tokens with a similar number of occurrences, but other POSs: often, big, house, wrote. We trained classifiers to predict whether the target token is in the neighborhood of the input token. This time, the objective is to compare these classifiers' accuracies depending on whether the input token is or isn't in the same clause as the target token (instead of whether the input token is within or outside the negation scope). And just as we did for the neg-classifiers, we will control for distance to the target token by breaking down the accuracies according to the distance between the target and the input tokens.
Training the classifiers with alternative target tokens
To train such classifiers, we repeated the same protocol as for the neg-classifiers: for each target word often, big, house, wrote, we randomly selected a balanced number of sentences containing and not containing it, and we randomly picked an input token within each sentence, independently of the presence of the target token, and in case of presence, independently of the clause boundary of the target token. We then split the examples into training (25.5k) and test sets (6.5k). We restricted ourselves to a single PLM, ROBERTA-large. The performances on the training and test sets are provided in Table 9. We note that performance is comparable for all four target tokens, and comparable to that of the neg-classifiers (cf. Table 4).

Table 10: Average accuracy when the input token is within a window of 8 tokens before and 8 tokens after the target token, broken down according to whether the input token is (In) or isn't (Out) in the same clause as the target token, and accuracy gap (In minus Out). The results are computed on the study-test-set of each target word, using the classifiers trained on ROBERTA-large embeddings.
Studying results when input tokens are within or outside the same clause
In order to study whether PLMs do encode the notion of syntactic clause, we compared the classifiers' performance when the input token is or isn't within the same clause as the target token.
For each target word, we built a study-test-set of 40,000 COCA sentences containing it. We parsed these sentences, and annotated each of their tokens (1) according to their distance to the target token, and (2) as belonging or not to the same clause as the target token. 13 As in section 3.1.2, we now define accuracy gaps as the average difference between a classifier's accuracy on input tokens that are within the same clause as the target token, minus the accuracy on input tokens from outside the clause. Table 10 shows the average accuracy gaps, for input tokens at distance at most 8 from the target token.
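One possible implementation of this per-token annotation, assuming a Stanza dependency parse and reading footnote 13 as "the subtree of the closest governing verb", is sketched below (our own illustration, not the authors' code):

```python
def annotate_tokens(sentence, target_text):
    """Distance to the target token and same-clause membership for every token,
    given a Stanza-parsed sentence."""
    words = sentence.words
    by_id = {w.id: w for w in words}
    target = next(w for w in words if w.text.lower() == target_text)
    # Climb to the closest governing verb (or stay on the target if it is itself a verb).
    node = target
    while node.upos != "VERB" and node.head != 0:
        node = by_id[node.head]
    # Collect the subtree of that verb as the clause.
    children = {}
    for w in words:
        children.setdefault(w.head, []).append(w.id)
    clause, stack = set(), [node.id]
    while stack:
        i = stack.pop()
        clause.add(i)
        stack.extend(children.get(i, []))
    return [{"token": w.text, "distance": w.id - target.id, "same_clause": w.id in clause}
            for w in words]
```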
The results show that for the 4 tested target words, predicting the presence of the target token is better achieved using an input token from the same clause than from outside the clause. Interestingly, the gaps are higher when the target token is a noun, verb or adverb, and less pronounced for the adjectival target token. Strikingly, except for the adjective big, the observed accuracy gaps are even bigger than that obtained using not as target token (cf. 2.6 for ROBERTA_l in Table 6). 14 This tends to indicate that the encoding of the negation scope observed in section 3.1 stems from a more general encoding of clause boundaries.
Moreover, breaking down the results by relative position to the target token (cf. Figure 6 in the Appendix) shows that the distance to the target token remains by far the most impactful factor.
Conclusion
In this paper, we studied the way negation and its scope are encoded in contextual representations of PLMs and to what extent this encoding is used to model NPI licensing.
We trained classifiers to predict the presence of not in a sentence given the contextual representation of a random input token. We also trained classifiers to predict the polarity of a masked polar item given the contextual representation of a random token. A test set of sentences was designed with not licensing an NPI, inside which we identified the negation scope and the licensing scope.
For these sentences, we found that the contextual embeddings of tokens within the scope of a negation allow a better prediction of the presence of not. These embeddings also allow a better prediction of the (negative) polarity of a masked PI. These results hold even when controlling for the distance to not. The amplitude of this trend though varies across the four PLMs we tested.
While this tends to indicate that PLMs do encode the notion of negation scope in English, and are able to further use it to capture a syntactic phenomenon that depends on the presence of not (namely the licensing of a negative polarity item), further experiments tend to show that what is captured is the more general notion of clause boundary. Indeed, negation scope is closely related to, and often amounts to, the syntactic clause. Using alternative target tokens with varied parts-of-speech, we find that classifiers are better able to predict the presence of such target tokens when the input token is within the same syntactic clause than when it is outside of it. These results lead us to conclude that knowledge of the negation scope might simply be a special case of knowledge of clause boundaries. Moreover, distance to the target token is a far stronger factor than the "being in the same clause" factor. We leave for further work the study of other factors, such as the POS of the input token, as well as the study of the differences in amplitudes observed between the PLMs we tested.
A Hyperparameter tuning for the neg-classifiers and the pol-classifiers
The PLMs' contextual representations were obtained using a GeForce RTX 2080 Ti GPU. The neg-classifiers, the pol-classifiers and the classifiers used to predict the presence of other target tokens were trained on a CPU, each training taking about 15 minutes. Then, testing them on the not+NPI test set took about 5 minutes.
To tune these classifiers, we performed a grid search with: the number of hidden layers in [1, 2], the number of units in each layer in [20, 50, 100, 450, 1000], and the learning rate in [1, 0.1, 0.01, 0.001].
We selected a learning rate of 0.001 and 2 hidden layers of size 450 each, based on the accuracies on the neg-test-set and the pol-test-set. Except when the learning rate equaled 1, all hyperparameter combinations resulted in similar performance (differences of less than 1 accuracy point in the results of Figure 3).
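The grid search can be reproduced schematically as follows (a sketch using scikit-learn's MLPClassifier as a stand-in for the probing classifiers; X_train/y_train and X_dev/y_dev are assumed to hold the input embeddings and labels):

```python
from itertools import product
from sklearn.neural_network import MLPClassifier

best = None
for n_layers, n_units, lr in product([1, 2], [20, 50, 100, 450, 1000], [1, 0.1, 0.01, 0.001]):
    clf = MLPClassifier(hidden_layer_sizes=(n_units,) * n_layers,
                        learning_rate_init=lr, max_iter=50)
    clf.fit(X_train, y_train)                  # frozen PLM embeddings and labels (assumed given)
    acc = clf.score(X_dev, y_dev)
    if best is None or acc > best[0]:
        best = (acc, n_layers, n_units, lr)
print("best configuration (accuracy, layers, units, learning rate):", best)
```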
The code and methodology were developed first using the BERT-base model, and then applied to the other models. Including code and methodology development, we estimate that the experiments reported in this paper correspond to a total of 160 hours of GPU computing.
B Statistical significance test
In this section we detail the test performed to assess the statistical significance of the accuracy differences illustrated in Figures 3 and 5.
For each of the four tested PLMs, and for each of 3 runs of classifier training,
• for each position from -8 to -1 relative to the not,
we compare the accuracy of the pol-classifier in the PRE-IN zone versus in the PRE zone (i.e. the difference between the purple bar with respect to the pink one). Namely, we test the statistical significance of the following positive difference: accuracy for tokens in the PRE-IN zone minus accuracy for tokens in the PRE zone.
• for each position from 3 to 8,
we test the statistical significance of the following positive difference: accuracy for tokens in the IN zone minus accuracy for tokens in the POST zone (i.e. the difference between the blue bar with respect to the green one).
Each test is an approximate Fisher-Pitman permutation test (with 5000 random permutations, performed using the script of Dror et al. (2018), https://github.com/rtmdrr/testSignificanceNLP.git), and all the differences listed above are statistically significant at p < 0.001.

Figure 6: Accuracy (average on 3 runs) on trace identification tasks. The target tokens are big and house, and the probed embeddings are from a ROBERTA-large LM. Results are broken down by zone (colors of the bars) and by relative position to not (horizontal axis). Further distances are omitted for clarity. The bar differences at each position and run are statistically significant at p < 0.001 (cf. Appendix B).
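A minimal sketch of such a two-sample permutation test (operating on per-example 0/1 correctness indicators for the two compared zones at a given position; our own illustration rather than the referenced script) is:

```python
import numpy as np

def permutation_test(correct_in, correct_out, n_permutations=5000, seed=0):
    """One-sided approximate permutation test: is accuracy higher inside the scope?"""
    rng = np.random.default_rng(seed)
    a = np.asarray(correct_in, dtype=float)     # 0/1 correctness, e.g. PRE-IN (or IN) zone
    b = np.asarray(correct_out, dtype=float)    # 0/1 correctness, e.g. PRE (or POST) zone
    observed = a.mean() - b.mean()
    pooled = np.concatenate([a, b])
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        if pooled[:len(a)].mean() - pooled[len(a):].mean() >= observed:
            count += 1
    return (count + 1) / (n_permutations + 1)   # approximate p-value
```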
C Supplementary figures
Figure 1: Accuracy of the ROBERTA-large-neg-classifier (average on 3 runs) on the not+NPI test set, broken down by zone (colors of the bars) and by relative position to not (horizontal axis). Further distances are omitted for clarity. No licensing scope contains less than 2 tokens, hence positions 1 and 2 are always in the IN zone. The bar differences at each position and run are statistically significant at p < 0.001 (cf. Appendix B). Figures for the other 3 models are provided in appendix Figure 4.

Figure 2: Illustration of the training of the pol-classifiers.

Figure 3: Accuracy of the ROBERTA-large-pol-classifier (average on 3 runs) on the not+NPI test set, broken down by zone (colors of the bars) and by relative position to not (horizontal axis). Further distances are omitted for clarity. No licensing scope contains less than 2 tokens, hence positions 1 and 2 are always in the IN zone. The bar differences at each position and run are statistically significant at p < 0.001 (cf. appendix Figure 5).
The break-downs by position for the three models not presented in the main text (BERT-base, BERT-large and ROBERTA-base) are provided in Figures 4 (neg-classifiers) and 5 (pol-classifiers). The break-downs by position for other target tokens are provided in Figure 6.
Table 2: Number of sentences from the COCA corpus, matching the neg-patterns of Table 1: Col1: total number, Col2-4: number having any* in the IN zone, the PRE-IN zone, and in both zones respectively.

with not | 2,285,000
→ with NPI | 143,000
→ pattern 1 | 30,896
→ pattern 3 | 2,529
→ pattern 5 | 1,020
→ pattern 6 | < 100

Table 3: Statistics of the not+NPI test set: number of COCA sentences matching the neg-patterns (cf. Table 1), and having at least one any* in the IN zone (licensing scope).
3 For instance in "Kim did not go to the party because Bob was there.", negation may scope only over the matrix clause or include the causal subordinate clause.
4 In all this article, not stands for either not or n't.
5 We ignored pattern 4 (never instead of not as licensor), and 6 (too few occurrences in our data). We merged patterns 1 and 2, and corrected an obvious minor error in pattern 5.
9 COCA sentences are tokenized and tagged. We detokenized them before BERT/ROBERTA tokenization, in order to get closer to a standard input.
10 Full details for all classifiers are provided in Appendix A.
11 (any/some)(∅/where/one/body/thing/time)
12 For any/some(∅/one/thing), we took 2 × 2000 occurrences. For any/some(body/time/where), fewer occurrences were available in some of the subcorpora. We took as many as possible, but keeping a strict balance between NPI and PPI sentences (between 2 × 169 and 2 × 958 depending on the corpus genre and on the NPI/PPI pair).
13 We identified the clause of the target token as the subtree of the head verb of the target token, in the dependency parse.
14 The gaps are not strictly comparable though, due to our defining the negation scope as a subset of the clause, filtering out sentential conjuncts and sentential parentheticals, cf. section 2.3.
Acknowledgements

We thank the reviewers for their valuable comments. This research was partially funded by the Labex EFL (ANR-10-LABX-0083).
Hande Celikkanat, Sami Virpioja, Jörg Tiedemann, and Marianna Apidianaki. 2020. Controlling the Imprint of Passivization and Negation in Contextualized Representations. In Proceedings of the Third BlackboxNLP Workshop on Analyzing and Interpreting Neural Networks for NLP, pages 136-148, Online. Association for Computational Linguistics.

Mark Davies. 2015. Corpus of Contemporary American English (COCA).

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.

Rotem Dror, Gili Baumer, Segev Shlomov, and Roi Reichart. 2018. The hitchhiker's guide to testing statistical significance in natural language processing. In Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1383-1392, Melbourne, Australia. Association for Computational Linguistics.

Allyson Ettinger. 2020. What BERT Is Not: Lessons from a New Suite of Psycholinguistic Diagnostics for Language Models. Transactions of the Association for Computational Linguistics, 8:34-48.

Reto Gubelmann and Siegfried Handschuh. 2022. Context matters: A pragmatic study of PLMs' negation understanding. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 4602-4621, Dublin, Ireland. Association for Computational Linguistics.

Vincent Homer. 2020. Negative Polarity, pages 1-39. John Wiley & Sons, Ltd.

Jaap Jumelet and Dieuwke Hupkes. 2018. Do Language Models Understand Anything? On the Ability of LSTMs to Understand Negative Polarity Items. In Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 222-231, Brussels, Belgium. Association for Computational Linguistics.

Nora Kassner and Hinrich Schütze. 2020. Negated and Misprimed Probes for Pretrained Language Models: Birds Can Talk, But Cannot Fly. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 7811-7818, Online. Association for Computational Linguistics.

Josef Klafka and Allyson Ettinger. 2020. Spying on Your Neighbors: Fine-grained Probing of Contextual Embeddings for Information about Surrounding Words. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 4801-4811, Online. Association for Computational Linguistics.

Bingzhi Li, Guillaume Wisniewski, and Benoit Crabbé. 2022. How distributed are distributed representations? An observation on the locality of syntactic information in verb agreement tasks. In Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers), pages 501-507, Dublin, Ireland. Association for Computational Linguistics.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv.

Roser Morante and Eduardo Blanco. 2012. *SEM 2012 Shared Task: Resolving the Scope and Focus of Negation. In *SEM 2012: The First Joint Conference on Lexical and Computational Semantics - Volume 1: Proceedings of the main conference and the shared task, and Volume 2: Proceedings of the Sixth International Workshop on Semantic Evaluation (SemEval 2012), pages 265-274, Montréal, Canada. Association for Computational Linguistics.

Peng Qi, Yuhao Zhang, Yuhui Zhang, Jason Bolton, and Christopher D. Manning. 2020. Stanza: A Python natural language processing toolkit for many human languages. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics: System Demonstrations.

Henriëtte de Swart. 1998. Licensing of negative polarity items under inverse scope. Lingua, 105(3-4):175-200.

Alex Warstadt, Yu Cao, Ioana Grosu, Wei Peng, Hagen Blix, Yining Nie, Anna Alsop, Shikha Bordia, Haokun Liu, Alicia Parrish, Sheng-Fu Wang, Jason Phang, Anhad Mohananey, Phu Mon Htut, Paloma Jeretic, and Samuel R. Bowman. 2019. Investigating BERT's Knowledge of Language: Five Analysis Methods with NPIs. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 2877-2887, Hong Kong, China. Association for Computational Linguistics.

Yu Zhang, Houquan Zhou, and Zhenghua Li. 2020. Fast and Accurate Neural CRF Constituency Parsing. In Proceedings of the Twenty-Ninth International Joint Conference on Artificial Intelligence, pages 4046-4053, Yokohama, Japan. International Joint Conferences on Artificial Intelligence Organization.
252,819,393 | How to Parse a Creole: When Martinican Creole Meets French | We investigate methods to develop a parser for Martinican Creole, a highly under-resourced language, using a French treebank. We compare transfer learning and multi-task learning models and examine different input features and strategies to handle the massive size imbalance between the treebanks. Surprisingly, we find that a simple concatenated (French + Martinican Creole) baseline yields optimal results even though it has access to only 80 Martinican Creole sentences. POS embeddings work better than lexical ones, but they suffer from negative transfer. | [
209202658,
10442626,
10786404,
19826970,
9865708,
236478365,
232621391,
219307246,
8525137,
29080401,
4932628,
222133301,
202758970,
204801691,
7942973,
236486110,
215755871
] | How to Parse a Creole: When Martinican Creole Meets French
October 12-17, 2022
Ludovic Mompelat lmompela@iu.edu
Indiana University
Daniel Dakota ddakota@iu.edu
Indiana University
Sandra Kübler skuebler@indiana.edu
Indiana University
How to Parse a Creole: When Martinican Creole Meets French
Proceedings of the 29th International Conference on Computational Linguistics, October 12-17, 2022
We investigate methods to develop a parser for Martinican Creole, a highly under-resourced language, using a French treebank. We compare transfer learning and multi-task learning models and examine different input features and strategies to handle the massive size imbalance between the treebanks. Surprisingly, we find that a simple concatenated (French + Martinican Creole) baseline yields optimal results even though it has access to only 80 Martinican Creole sentences. POS embeddings work better than lexical ones, but they suffer from negative transfer.
Introduction
Syntactic analysis is an essential task for language documentation and language revitalization, as it allows a deeper understanding of languages. Under-resourced languages often suffer from the lack of annotated gold standard data available to develop and offer NLP solutions for the communities speaking the language. Moreover, there is a low number of researchers trained in formal linguistics and/or linguistic annotation, which causes additional challenges in the creation of language resources for such languages. In recent years, research on parsing has developed a focus on under-resourced languages (Agić et al., 2016; Vania et al., 2019; Meechan-Maddon and Nivre, 2019), but creoles have received less attention.
In this study, we develop a dependency parsing model for Martinican Creole (MC), a French-based Creole, mostly spoken in Martinique, a French island in the Caribbean. Since Martinique is a French territory, French and MC coexist in an unbalanced manner. The diglossic situation makes French the dominant language in many contexts, although in the past decades Martinican Creole has seen an expansion of its communicative contexts (Bernabé, 2004; Véronique, 2020). This is due to the codification and standardization processes the language underwent, especially by the GEREC (1982), and to the linguistic policies developed in an effort to safeguard and revitalize this language. However, one aspect of this revitalization process that is currently missing is the expansion of NLP tools and resources for MC (and other creole languages). Creole languages based on the same lexifier language (in our case French) are extremely similar and, in many cases, mutually intelligible. Thus, developing NLP solutions for one creole language provides a basis to transfer knowledge to other related Creole languages.
The goal of this project is to investigate the best methods for developing a parser for an extremely low resource language when this language is a creole language. In our case, the creole is Martinican Creole. The main question here is whether the lexifier language, i.e., French, is similar enough to serve as basis for training a parser without further modification.
Research Questions
Our overarching research question is the following: Can we leverage a French treebank using transfer learning or multi-task learning approaches to create a parser model for Martinican Creole, given that we only have a very small treebank? Can we leverage the similarity between the creole and its lexifier language, French?
To answer this question, we need to answer the following questions:
1. Which types of embeddings can be used? Given the differences in spelling, are character embeddings, POS embeddings, or BERT embeddings the best representation of the input sentence? (We will not consider multilingual BERT models since the closest language is French, and we have access to large French embeddings models.)

2. In a transfer learning setting, how do we best use the very limited Martinican Creole data? Is it worth the effort to annotate data for optimizing the parser, or can we optimize it on French? Is there enough structural and lexical similarity between French and the creole to make this possible?
3. In a transfer learning setting, how do we deal with the extreme imbalance between the large French Treebank and the small Martinican Creole Treebank? Can we prevent the parser from overfitting?
4. Can we leverage a multi-task learning model to handle the imbalance between French and the creole? More specifically, will loss weighting be able to counterbalance the treebank sizes?
5. Can we determine the linguistic characteristics of Martinican Creole that provide challenges to parsers based on standard transfer learning and on multi-task learning?
Related Work
Creoles are still under-researched in NLP. Notable work includes Lent et al. (2021), who compare language models trained with empirical risk minimization against language models trained with distributionally robust optimization for Haitian Creole, Nigerian Pidgin English, and Singaporean Colloquial English, finding that the former performed better for Creoles. One reason postulated may be the absence of drift due to the relative stability of creoles. Regarding French-based creoles, Haitian Creole was the subject of an extensive collaboration in Machine Translation led by Microsoft Research (Lewis, 2010) following the 2010 earthquake. Millour and Fort (2018) led a project crowdsourcing POS tags for Guadeloupean Creole in which they describe the necessary steps and methodology to crowdsource a language for POS tagging. They were able to collect a corpus of nearly 2,500 POS-tagged tokens and create a POS tagger reaching 84% accuracy.
The lack of available creole treebanks, with Nigerian Pidgin English (Caron et al., 2019) the only publicly available Universal Dependency treebank, means that best parsing strategies for Creoles are still being developed. Given the lack of available data, parsing creoles can be viewed as similar to the need to leverage related treebanks to try and increase performance on the target treebank. A common approach is to concatenate available treebanks and optimize towards the target treebank. This has demonstrated gains in both monolingual (Björkelund et al., 2017;Velldal et al., 2017) and cross-lingual (Das et al., 2017) experiments. Another successful technique is to instead train a model on a source treebank and then fine-tune on the target treebank (Shi et al., 2017;Che et al., 2017).
The most directly related works to ours are Wang et al. (2017, 2019), since they parse Singlish, an English-based Creole, by leveraging its lexifier language, English, to boost performance. Wang et al. (2017) propose a neural stacking architecture which yielded promising results and which was further investigated by Wang et al. (2019). They tripled the size of their original Singlish treebank by web scraping and annotating more data and performed additional multi-task experiments for integrating English syntactic knowledge. While multi-task models showed some success, neural stacking methods were still better, as was simply concatenating English and Singlish treebanks in some experiments. Such neural stacking architectures with additional POS information have also helped in the related task of parsing Hindi-English code-switching data (Bhat et al., 2018). As far as we know, we are the first to approach the task of dependency parsing a French-based Creole.
Properties of Martinican Creole
Martinican Creole (MC) is a French-based creole and part of the Atlantic Creoles language family. Syntactically, MC is an SVO language and is closely related to French, to other creoles such as Guadeloupean, Marie Galante, St. Barth, Saint Lucian Creoles, and Haitian Creole, and to a lesser degree to African languages. The differences between MC and the closely related Antillean creoles are mostly lexical; they share very similar syntactic structures.
While MC originates from French, both languages show noticeable syntactic differences, especially wrt. the word order in noun phrases.
(1) Zanmi-mwen enmen liv-la (MC)
    friends-my like book-the
    Mes amis aiment le livre (French)
    "My friends like the book"

Example (1) shows a sentence in Martinican Creole. It demonstrates that determiners like -la and modifying pronouns like -mwen are post-posed, compared to their French and English counterparts mes (my) and le (the).
Despite these differences in morpheme order, it is still relatively easy to see the direct parallels between both languages. This makes French a good candidate for a transfer learning approach to parsing MC.
MC is considered a morphologically reduced language (Hazaël-Massieux, 2002): tense, mood and aspect features are expressed as separate morphemes/markers instead of inflections on the verbal element. There is also no morphological gender/number marking on nouns and adjectives.
(2) A ce qu'il parait, ils vendraient leurs propres frères et leurs soeurs. "Apparently, they would sell their own siblings"
In example (2), we see that the conditional is expressed in MC by a morpheme combination of the Past/Perfective marker té and the Future/Irrealis marker ké whereas in French, the conditional is expressed synthetically by the affix -raient attached to the end of the verb vendre. We also see that general plural nouns like frè-yo and sè-yo are not morphologically marked in MC, and neither is their accompanying adjective prop, whereas in French frères and soeurs and propres are all morphologically marked for gender and number.
Finally, while MC uses a different spelling system from French, the MC pronunciation is much closer to its spelling than in French. MC acquired most of its lexicon from French. Lexical transfer was either phonetically transparent or underwent reanalysis via several phono-lexical processes (such as agglutination (see example (3)), apheresis (see example (4)), syncope (see example (5)), etc.).
( In both cases, while the lexical transfer can easily be identified at the phonetic level, it is a more difficult to identify at the orthographic level, since there are significant differences in the respective spelling systems. Because of the amount of differences, it is possible that French embeddings may not be useful, since there may not be enough lexical overlap between French and MC, even on the subword level.
3) Agglutination diri [di.Ki] (MC) du riz [dy.Ki] (French)(
Methodology
Treebanks
French Treebank For our source treebank, we use the French GSD treebank (Guillaume et al., 2019) 1 as it is sufficiently large in size and predominantly consists of news articles, which aligns better with the newly created MC treebank.
MC Treebank
The MC treebank consists of news and blog articles written in Martinican Creole by native speakers. Texts range from 2004 to 2021 and consist of two primary sources: 1) the Kréyolad 2 collections, which gather all the article contributions of Jude Duranty to the newspaper Antilla 3 from 2004 to 2018, and 2) the collective blog Montray Kréyol 4 , which contains columns from numerous authors, written in French and various (mostly French-based) creoles. Selected texts were annotated by the first author. The fully annotated treebank of MC consists of 240 sentences and a total of 4809 tokens. 5
Annotation of MC Treebank We tokenized the texts using the NLTK Tokenizer 6 and then annotated them for POS information using INCePTION (Klie et al., 2018). INCePTION provides an automatic POS tagger that trains on the annotations as one makes them and retrains itself whenever a new word receives a tag. We then used UD Annotatrix (Tyers et al., 2017) for the dependency annotations. The treebank is not annotated for lemmas or morphological information.

Treebank | Train | Dev | Test
FR-GSD | 13,072 | 1,634 | 1,634
MC | 80 | 80 | 80

Table 1: Sizes of the different data sets (number of sentences).
Experimental Setup
Data Splits Due to the small size of the MC corpus, we split the treebank into equal-sized folds for train, dev, and test of 80 sentences each. For more generalized results, we generate three different randomized splits and report results averaged over the three runs. For the French GSD treebank, we use the standard train/dev/test split, unless otherwise noted. Table 1 shows the sizes of the different data sets.
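A sketch of how such randomized splits could be generated from a CoNLL-U file (file names are hypothetical; the conllu package is one possible way to read and write the format):

```python
import random
from conllu import parse

sentences = parse(open("mc_treebank.conllu", encoding="utf-8").read())
assert len(sentences) == 240

for seed in (1, 2, 3):
    random.Random(seed).shuffle(sentences)
    splits = {"train": sentences[:80], "dev": sentences[80:160], "test": sentences[160:240]}
    for name, part in splits.items():
        with open(f"mc.{name}.seed{seed}.conllu", "w", encoding="utf-8") as f:
            # keep one blank line between sentences, as required by the CoNLL-U format
            f.write("\n\n".join(s.serialize().strip() for s in part) + "\n\n")
```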
Parser We use the Deep Biaffine parser (Dozat and Manning, 2017) implemented in the SuPar parsing library. 7 The parser is a neural graph-based dependency parser which uses biaffine attention and a biaffine classifier in combination with dimension-reducing MLP layers to reduce non-relevant information.
We experiment with different input embeddings: character, POS tag, and BERT embeddings. Note that SuPar always includes word embeddings, so that we can only use (word+)POS and (word+)BERT. For all POS embeddings, we use gold POS tags. For the BERT embeddings, we use the French camemBERT (Martin et al., 2020) 8 .
In addition, we also use a multi-task learning parser where each treebank is treated as a separate task (Sayyed and Dakota, 2021). Both the input embeddings into the BiLSTM and the subsequent MLP layers are shared, which allows for information transfer during joint optimization between the treebanks. We also experiment with weighting treebanks with respect to their joint loss contribution, which has been shown to be beneficial when data imbalances exist between treebanks, as in our case. Results reported are obtained using the scorer from the CoNLL 2018 shared task (Zeman et al., 2018).
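For reference, a simplified version of the UAS/LAS computation is sketched below; the official CoNLL 2018 scorer additionally handles tokenization mismatches, multi-word tokens and empty nodes, which this sketch ignores by assuming identical tokenization in the gold and predicted files:

```python
from conllu import parse

def attachment_scores(gold_path, pred_path):
    gold = parse(open(gold_path, encoding="utf-8").read())
    pred = parse(open(pred_path, encoding="utf-8").read())
    total = uas = las = 0
    for gold_sent, pred_sent in zip(gold, pred):
        for g, p in zip(gold_sent, pred_sent):
            if not isinstance(g["id"], int):      # skip multi-word token ranges / empty nodes
                continue
            total += 1
            if g["head"] == p["head"]:
                uas += 1
                if g["deprel"] == p["deprel"]:
                    las += 1
    return 100 * uas / total, 100 * las / total
```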
Results

Baselines
We first need to establish the baselines, i.e., training on the French training set, training on the Martinican Creole training set, and concatenating these two. Here, we optimize and test on the MC dev set. Table 2 shows the results for these baseline models. These results show that the French training data gives us the lowest results. The best model, using POS embeddings, results in an LAS of 51.95. Using character and BERT embeddings results in considerable losses (LAS: 11.73 and 21.63); this can be attributed to the significant differences in spelling between French and MC (see section 4). Training on 80 MC sentences is surprisingly successful. Again, using the POS embeddings shows the best results (LAS: 62.86). It is worth noting how beneficial the use of POS embeddings is for MC compared to subword information. One reason is simply the small data size of the MC treebank; another reason may be that some of the linguistic properties of MC are disambiguated via POS tags but not via characters. However, the concatenation of both training sets results in the highest scores overall, with an LAS of 71.77 for POS embeddings. This is particularly interesting given that the French training size is about 136 times the size of the MC training but this small amount is enough to direct the French-trained model in a beneficial direction.
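The concatenation baseline itself requires no special machinery: the two CoNLL-U training files are simply concatenated before training, as sketched below (file names are hypothetical):

```python
def concatenate(paths, out_path):
    """Write the sentences of several CoNLL-U files into a single training file."""
    with open(out_path, "w", encoding="utf-8") as out:
        for path in paths:
            with open(path, encoding="utf-8") as f:
                out.write(f.read().rstrip("\n") + "\n\n")   # keep sentences blank-line separated

concatenate(["fr_gsd-ud-train.conllu", "mc-train.conllu"], "fr+mc-train.conllu")
```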
Optimization
Since we operate in a very low-resource setting, the next question is whether it is worth annotating sentences to use for optimizing the parser or whether the neural architecture does not require target language specific optimization. Thus, we compare a setting trained and optimized on French with a setting where we train on French and optimize on 80 MC sentences. The results are shown in Table 3. 9 .
Our results show increases when the source French model has been optimized on MC as compared to French. This is true for all types of embeddings. The POS model shows a sizable improvement of more than 6 percent points for LAS while the improvements for the character and BERT models are more modest, around 2-3 percent points. However, we do not reach the MC baseline in any setting.
Fine-tuning
We next experiment with transfer learning in order to see if we can improve on the French baseline by fine-tuning on the MC training set. Given the difference in size, the MC data should not have a noticeable effect, but since the concatenation baseline proved so successful, we need to determine whether fine-tuning on the MC training has the same effect. When training on French, we have two settings: We either optimize on French or on MC. When fine-tuning on MC, we optimize on MC. Table 4 shows the results of these experiments. Note that the results without fine-tuning are repeated from Table 3. The results show very clearly that fine-tuning is only successful when the first stage is optimized on MC. If we optimize that stage on French, fine-tuning does not result in any improvement. This is likely due to the fact that training a fully optimized French model results in overfitting, which in turn does not allow the little MC data to effectively update the parameters.

9 Note that the results of the French model optimized on MC are repeated from Table 2.
When optimizing on MC, we note that all models show a drastic improvement in performance, especially for the BERT and character embeddings. Out of the three types of embeddings, the model using character embeddings benefits the most from finetuning, improving from 25.05 to 64.71 for UAS and from 11.73 to 46.83 for LAS, followed by the BERT embeddings model going from 38.23 to 67.10 for UAS and 21.63 to 49.41 for LAS. The most successful model, using POS embeddings, reaches an LAS of 60.83. While this is still below the concatenation model, it shows again the usefulness of POS embeddings.
Overfitting
One reason for the lack of improvement of the model optimized on French in Table 4 may be that training a fully optimized French model results in overfitting, which in turn does not allow the little MC data to effectively update the parameters. To investigate the issue of overfitting, we perform experiments where we stop the training early. Since it is unclear how to determine good stopping points, we stop the training at epoch 1 as well as at 1/4, 1/2, and 3/4 of the optimal number of epochs of the French model (trained and optimized on French), and perform fine-tuning experiments in two settings: fine-tuning the model on MC, and on the concatenated FR+MC treebank. In both cases, we optimize on MC. The results of these experiments are shown in Table 5.
When comparing the two fine-tuning settings, we note that none of the 1/4, 1/2, 3/4 or fully optimized models benefit from the MC or FR+MC data during fine-tuning, as the results are not substantially different from the ones without fine-tuning. This indicates that the more a model is trained and optimized on French, the less it is able to profit from having access to MC data. The only models showing noticeable benefit from fine-tuning are the epoch 1 character model fine-tuned on MC and the epoch 1 character and POS models fine-tuned on FR+MC, but both are still below their respective baselines. When we look at the experiments with fewer epochs, we see a deterioration of the results from fewer epochs to the full number of epochs, showing clear signs of overfitting. This trend holds across all conditions but is strongest for the highest performing model using POS embeddings. Here the LAS decreases from 49.81 to 45.54. However, even the results at 1/4 epochs are far below the MC baseline.
Multi-task Learning
Another approach to information sharing is multi-task learning (MTL). By treating each treebank as a task, MTL allows them to be optimized jointly, combining information from the other treebank in the process rather than sequentially, as in a typical transfer learning setup. For this experiment, we have two settings, one without loss weighting and one with loss weighting. Reducing the weight of the smaller treebank may help reduce the negative transfer that can occur: given the small size of the MC training set, its contribution to the overall loss may be too high, leading the parser in a sub-optimal direction. This assumption has been shown to hold for a domain adaptation setting, where assigning higher weights to the larger and lower weights to the smaller treebank yielded the best performance. Consequently, we assign a loss weight of 0.9 to the French treebank and 0.1 to the MC treebank.
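A minimal sketch of the weighted joint loss follows, assuming a single shared model and per-treebank loss weights of 0.9 (French) and 0.1 (MC) as in the paper; a real MTL parser would combine a shared encoder with task-specific components, which are omitted here, and the batches are random placeholders.

```python
import torch
from torch import nn, optim

shared = nn.Linear(50, 10)                      # stand-in for the shared parser
loss_fn = nn.CrossEntropyLoss()
opt = optim.Adam(shared.parameters(), lr=1e-3)
weights = {"fr": 0.9, "mc": 0.1}                # loss weights from the paper

def sample_batch(size):
    """Placeholder for drawing a batch from the corresponding treebank."""
    return torch.randn(size, 50), torch.randint(0, 10, (size,))

for step in range(100):
    opt.zero_grad()
    total = 0.0
    for task, size in (("fr", 32), ("mc", 8)):
        x, y = sample_batch(size)
        total = total + weights[task] * loss_fn(shared(x), y)
    total.backward()
    opt.step()
```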
The results of this experiment are shown in Table 6. Results are generally better than in the fine-tuning setting. However, the best result so far is still the baseline trained on only 80 sentences of MC and using POS embeddings (see Table 2), as none of the MTL settings reach this result. When we compare the weighted and non-weighted settings, we see an improvement of about 1.5 points (LAS: from 56.58 to 58.26) for the character model and a minimal gain for the BERT model (LAS: from 56.33 to 56.78), but a small decrease for the POS model. It is noticeable that using POS information leads to substantially worse results in comparison to the other models, thus contradicting the trends of the previous experiments. This further reinforces the notion of negative transfer when sharing POS information.
We next look at a setup where we use the FR+MC concatenated treebank as one of our tasks and the MC treebank as the other, with both optimizing on the same development set but using different weights. Table 7 shows the results of this experiment (the FR/MC setting is repeated from Table 6). We see that using the combined FR+MC training set gives us a moderate boost of 2-3 percentage points over the FR/MC setting. Here, the UAS improves over the best MC-only baseline, but the LAS does not. Additionally, we can see that even further reducing the MC weight tends to yield better performance for LAS, suggesting that as the data imbalance becomes extreme, so does the need to downweight the smaller treebank.
Analysis
All the experiments described above tell us that the best method to parse Martinican Creole given a very small treebank is to concatenate the two treebanks. It is unclear why first training on French and then fine-tuning on MC does not result in a similar performance. And it is equally unclear why the POS embeddings are successful in transfer learning but not in a multi-task learning setting. We assume that the two are related and will analyze the data to shed light on these questions.
Correlation of POS Tags and Parser Errors
We first look into the correlation between specific part-of-speech tags and parser errors. More specifically, we look at the label accuracy of incoming arcs per POS tag, with a primary focus on the experiments that include the concatenated FR+MC data during training. Table 8 presents the results for the FR+MC baselines for each embedding type and their best respective MTL settings from Table 7 (all numbers are averaged over the three folds). We see the same trends across most open and closed class POS tags within one setting. Since the lexical models (char and BERT) show significantly lower results than the POS model, this points to a disconnect on the lexical level (caused by the different spelling systems) that can only be overcome by adding POS information. However, in this case, we would expect a better performance of the POS model in our MTL task in comparison to the MC baseline. Since this does not happen, we assume that there are significant differences on the POS level between the two languages, causing negative transfer for the MTL model (one facet of this will be investigated in more detail in the next section).
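The per-POS analysis can be reproduced with a short script along the following lines, assuming gold and predicted CoNLL-U files with identical tokenisation; the file names are placeholders.

```python
from collections import defaultdict

def token_rows(path):
    """Yield the 10-column token rows of a CoNLL-U file, skipping comments and ranges."""
    with open(path, encoding="utf-8") as f:
        for line in f:
            cols = line.rstrip("\n").split("\t")
            if len(cols) == 10 and cols[0].isdigit():
                yield cols

correct, total = defaultdict(int), defaultdict(int)
for gold, pred in zip(token_rows("mc-test-gold.conllu"), token_rows("mc-test-pred.conllu")):
    upos, gold_label, pred_label = gold[3], gold[7], pred[7]
    total[upos] += 1
    correct[upos] += int(gold_label == pred_label)

for upos in sorted(total):
    print(f"{upos:6s} {100 * correct[upos] / total[upos]:6.2f}")
```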
One notable trend is related to the accuracies for adjectives and adverbs: While the baseline POS model can parse those POS very successfully, the MTL POS model reaches accuracies that are below the MTL character and BERT models for adjectives and comparable for adverbs (again, see below for an explanation).
POS Distribution
We now have a closer look at the POS distributions in French and MC, to determine whether POS ambiguity rates can give us insights into the differences between French and MC on the POS level. However, a direct comparison does not seem to be feasible since the MC treebank is too small to give us a stable picture, especially compared to the large French treebank. For this reason, we decided to use the full 240 sentences of the MC treebank and to randomly sample 240 sentences from the French treebank (averaged over 10 repetitions). While the small number will introduce some variability, the results will be more comparable across the languages.
When looking at the percentage of ambiguous words, 2.2% of the word types (in the POS lexicon) for French and 7.0% for MC are ambiguous, showing that about 3 times more MC word types are ambiguous. Additionally, the percentage of ambiguous word types amounts to 13.0% when we concatenate the French and MC treebanks. Table 9 shows the rates of ambiguous word types per POS tag. A comparison of French and MC shows that for all POS tags, MC words are ambiguous about 3 times more often. And while French subordinating conjunctions (SConj) and prepositions (Adp) tend to be frequently ambiguous, this ratio increases to more than 50% for MC. Additionally, the percentages for the combined treebank show that the ambiguities are mostly additive, i.e., there is not much overlap between the ambiguous words in French and MC. This at least partly explains the difficulties of the POS models. The most extreme cases are subordinating conjunctions and prepositions, for which more than 50% are ambiguous in MC and more than 60% in the combined treebank. For the open class POS tags, adjectives and adverbs are the most affected. In the case of adjectives, the combined treebank shows a doubling of the ambiguity rate compared to MC, thus indicating not only an increase in ambiguity in MC, but also a high number of words that are only considered adjectives in one of the languages but not in both. This partly explains the low results for adjectives in the MTL POS setting in Table 8.
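The ambiguity rates can be computed with a sketch like the following, which builds a word-type-to-UPOS lexicon from a treebank and reports, per tag, the share of word types that also occur with another tag; the file path is a placeholder and the random sampling of 240 French sentences described above is omitted.

```python
from collections import defaultdict

def pos_lexicon(path):
    """Map each word form (lowercased) to the set of UPOS tags it occurs with."""
    lexicon = defaultdict(set)
    with open(path, encoding="utf-8") as f:
        for line in f:
            cols = line.rstrip("\n").split("\t")
            if len(cols) == 10 and cols[0].isdigit():
                lexicon[cols[1].lower()].add(cols[3])
    return lexicon

def ambiguity_per_pos(lexicon, tags=("NOUN", "VERB", "ADJ", "PROPN",
                                     "ADV", "CCONJ", "SCONJ", "ADP")):
    rates = {}
    for tag in tags:
        types = [w for w, t in lexicon.items() if tag in t]
        ambiguous = [w for w in types if len(lexicon[w]) > 1]
        rates[tag] = 100 * len(ambiguous) / len(types) if types else 0.0
    return rates

print(ambiguity_per_pos(pos_lexicon("mc-all.conllu")))
```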
Example
Martinican Creole shows a systematic ambiguity between nouns and adjectives. The word politik is one example, as shown in examples (6) and (7). In example (6) the word is misidentified as a noun, which leads the character model to interpret it as an nmod of désizion instead of its amod (see Figure 1). Having access to the gold POS tags helps the POS model disambiguate it correctly.
Conclusion and Future Work
In this study, we built a first parser of Martinican Creole using French as the supporting language to address the extremely low-resource setting of the creole.
Our main finding is that, surprisingly, we obtain the best parsing results with our baseline model trained on a concatenation of the French and MC training sets. The success of the concatenated baseline model shows that even as little as 80 MC sentences in the training set is enough to steer the POS model in the right direction.
Even the baseline POS model trained on 80 MC sentences outperforms all transfer and MTL models, the single exception being the UAS of the MTL character and BERT models. Partial explanations for these results can be found in the different spelling systems used for French and MC (see Section 4) and in the high level of ambiguity of MC, and specifically of MC adjectives and adverbs. Whether POS tags are needed in neural dependency parsing is still an open question (Anderson and Gómez-Rodríguez, 2020; Zhou et al., 2020), and our findings further complicate this picture. In our case, they can reduce ambiguity in our baselines, but increase ambiguity across the two languages. Since we use gold POS tags, there remains the open question of whether the same effects will occur with automatically annotated POS tags.
Our results partially contradict findings by Wang et al. (2019) for Singlish. They also found that in the low resource setting (using 900 Singlish sentences), treebank concatenation outperforms MTL. However, in their work, MTL outperformed the baselines for both individual treebanks, while we did not see this increase in performance across experiments. Our findings thus confirm that to improve our performance on parsing MC using French, we will need to reduce the imbalance between the two languages by augmenting the MC data. For the future, we are planning to investigate whether a larger MC training set will have a positive effect in the MTL setup. However, the fact that as little as 240 annotated sentences, provided that we concatenate them with French data, can yield an LAS in the low 70s indicates that it is possible to develop parsing models for French-based creoles without extensive annotation projects.

Figure 1: zot wè ni an désizion politik parses for POS and char predictions.
Table 1: Distribution in Train/Dev/Test sets of FR-GSD and Martinican Creole (MC) treebanks.

Table 2: Baselines for training on French, Martinican (MC), and concatenated French+Martinican (FR+MC).
Table 3: MC test performance when optimizing on French and MC.

Table 4: Performance with and without fine-tuning on MC.

Dev.    Embed.  Finet.   UAS    LAS
French  char    no      20.03   9.16
                yes     20.03   9.16
        POS     no      58.17  45.54
                yes     58.17  45.54
        BERT    no      33.07  18.38
                yes     33.07  18.38
MC      char    no      25.05  11.73
                yes     64.71  46.83
        POS     no      65.08  51.95
                yes     72.87  60.83
        BERT    no      38.23  21.63
                yes     67.10  49.41
Table 5: LAS when training on 1/4, 1/2, 3/4 of the best epoch of the French model, fine-tuned on MC or FR+MC.
Table 6: Results for MTL with non-weighted and weighted losses on the MC task. All weighted experiments use 0.9 for French and 0.1 for MC.

Embed.   No weight        Weight
         UAS     LAS      UAS     LAS
char     70.23   56.58    71.44   58.26
POS      64.67   50.46    64.99   50.12
BERT     69.39   56.33    70.76   56.78
Table 7: Results for the MC task using varying weights for the MTL parser, training on either FR and MC or on FR+MC and MC and testing on MC.
Table 8: Accuracy of dependency labels per POS tag for FR+MC baseline and best MTL experiments.

               Noun   Verb   Adj    Propn  Adv    CConj  SConj  Adp    LAS
baseline char  56.76  63.67  50.36  60.82  57.38  66.80  60.27  68.49  60.57
baseline POS   70.52  72.03  84.45  66.78  87.21  88.12  91.34  89.23  71.77
baseline BERT  54.92  58.47  53.05  58.15  51.66  62.57  55.92  65.44  58.57
MTL char       58.03  64.45  50.90  58.53  62.64  71.08  65.38  66.55  61.47
MTL POS        53.00  58.61  45.35  56.93  59.72  68.49  65.01  68.56  57.66
MTL BERT       55.67  60.96  55.68  60.82  58.45  69.48  58.19  65.20  60.08

Table 9: Average of ambiguous word types per POS tag.

        Noun   Verb   Adj    Propn  Adv    CConj  SConj  Adp    Total
French  2.2%   3.5%   5%     0.5%   8.9%   8.4%   48.8%  18.6%  2.2%
MC      5.7%   9.5%   17.9%  1.7%   24.5%  37.5%  61.8%  51.8%  7.0%
FR+MC   14.5%  15.3%  34.6%  17.3%  35.8%  51.7%  70.6%  62.9%  13.0%
Experiments training with all French treebanks were computationally more expensive and yielded poorer results.
2 https://www.potomitan.info/duranty/kreyolad.php
3 https://antilla-martinique.com/
4 https://www.montraykreyol.org/
5 The treebank will be released in the next UD cycle.
6 https://www.nltk.org/api/nltk.tokenize.html We used the default model (English) since we did not expect any differences in punctuation.
7 https://github.com/yzhangcs/parser
8 We also experimented with the other large French LM, FlauBERT (Le et al., 2020), but this yielded worse performance.
Multilingual Projection for Parsing Truly Low-Resource Languages. Željko Agić, Anders Johannsen, Barbara Plank, Natalie Héctor Martínez Alonso, Anders Schluter, Søgaard, Transactions of the Association for Computational Linguistics. 4Željko Agić, Anders Johannsen, Barbara Plank, Héc- tor Martínez Alonso, Natalie Schluter, and Anders Søgaard. 2016. Multilingual Projection for Parsing Truly Low-Resource Languages. Transactions of the Association for Computational Linguistics, 4:301- 312.
On the frailty of universal POS tags for neural UD parsers. Mark Anderson, Carlos Gómez-Rodríguez, Proceedings of the 24th Conference on Computational Natural Language Learning. the 24th Conference on Computational Natural Language LearningOnlineMark Anderson and Carlos Gómez-Rodríguez. 2020. On the frailty of universal POS tags for neural UD parsers. In Proceedings of the 24th Conference on Computational Natural Language Learning, pages 69-96, Online.
Éléments d'écolinguistique appliqués à la situation martiniquaise. Jean Bernabé, Créoles-Langages et Politiques linguistiques: Actes du XXVI e Colloque International de Linguistique Fonctionnelle-30 septembre-7 octobre 2002 à Gosier (Guadeloupe). Peter LangJean Bernabé. 2004. Éléments d'écolinguistique ap- pliqués à la situation martiniquaise. In Créoles- Langages et Politiques linguistiques: Actes du XXVI e Colloque International de Linguistique Fonctionnelle-30 septembre-7 octobre 2002 à Gosier (Guadeloupe), pages 13-29. Peter Lang.
Universal Dependency parsing for Hindi-English code-switching. Irshad Bhat, A Riyaz, Manish Bhat, Dipti Shrivastava, Sharma, 10.18653/v1/N18-1090Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language TechnologiesNew Orleans, LAIrshad Bhat, Riyaz A. Bhat, Manish Shrivastava, and Dipti Sharma. 2018. Universal Dependency parsing for Hindi-English code-switching. In Proceedings of the 2018 Conference of the North American Chap- ter of the Association for Computational Linguistics: Human Language Technologies, pages 987-998, New Orleans, LA.
IMS at the CoNLL 2017 UD shared task: CRFs and perceptrons meet neural networks. Anders Björkelund, Agnieszka Falenska, Xiang Yu, Jonas Kuhn, 10.18653/v1/K17-3004Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal DependenciesVancouver, CanadaAnders Björkelund, Agnieszka Falenska, Xiang Yu, and Jonas Kuhn. 2017. IMS at the CoNLL 2017 UD shared task: CRFs and perceptrons meet neural net- works. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Univer- sal Dependencies, pages 40-51, Vancouver, Canada.
A Surface-Syntactic UD Treebank for Naija. Bernard Caron, Marine Courtin, Kim Gerdes, Sylvain Kahane, TLT 2019, Treebanks and Linguistic Theories. Paris, FranceBernard Caron, Marine Courtin, Kim Gerdes, and Syl- vain Kahane. 2019. A Surface-Syntactic UD Tree- bank for Naija. In TLT 2019, Treebanks and Linguis- tic Theories, Syntaxfest, Paris, France.
The HIT-SCIR system for end-to-end parsing of Universal Dependencies. Wanxiang Che, Jiang Guo, Yuxuan Wang, Bo Zheng, Huaipeng Zhao, Yang Liu, Dechuan Teng, Ting Liu, 10.18653/v1/K17-3005Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal DependenciesVancouver, CanadaWanxiang Che, Jiang Guo, Yuxuan Wang, Bo Zheng, Huaipeng Zhao, Yang Liu, Dechuan Teng, and Ting Liu. 2017. The HIT-SCIR system for end-to-end parsing of Universal Dependencies. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Pars- ing from Raw Text to Universal Dependencies, pages 52-62, Vancouver, Canada.
Bidirectional domain adaptation using weighted multi-task learning. Daniel Dakota, Ali Zeeshan, Sandra Sayyed, Kübler, 10.18653/v1/2021.iwpt-1.10Proceedings of the 17th International Conference on Parsing Technologies and the IWPT 2021 Shared Task on Parsing into Enhanced Universal Dependencies (IWPT 2021). the 17th International Conference on Parsing Technologies and the IWPT 2021 Shared Task on Parsing into Enhanced Universal Dependencies (IWPT 2021)OnlineDaniel Dakota, Zeeshan Ali Sayyed, and Sandra Kübler. 2021. Bidirectional domain adaptation us- ing weighted multi-task learning. In Proceedings of the 17th International Conference on Parsing Tech- nologies and the IWPT 2021 Shared Task on Parsing into Enhanced Universal Dependencies (IWPT 2021), pages 93-105, Online.
Delexicalized transfer parsing for low-resource languages using transformed and combined treebanks. Ayan Das, Affan Zaffar, Sudeshna Sarkar, 10.18653/v1/K17-3019Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal DependenciesVancouver, CanadaAyan Das, Affan Zaffar, and Sudeshna Sarkar. 2017. Delexicalized transfer parsing for low-resource lan- guages using transformed and combined treebanks. In Proceedings of the CoNLL 2017 Shared Task: Mul- tilingual Parsing from Raw Text to Universal Depen- dencies, pages 182-190, Vancouver, Canada.
Deep biaffine attention for neural dependency parsing. Timothy Dozat, Christopher Manning, 5h International Conference on Learning Representations. Toulon, FranceICLR 2017Timothy Dozat and Christopher Manning. 2017. Deep biaffine attention for neural dependency parsing. In 5h International Conference on Learning Represen- tations (ICLR 2017), Toulon, France.
Charte culturelle créole: se pwan douvan avan douvan pwan nou! GEREC, Centre universitaire Antilles-Guyane. Gerec, GEREC. 1982. Charte culturelle créole: se pwan dou- van avan douvan pwan nou! GEREC, Centre univer- sitaire Antilles-Guyane.
Conversion et améliorations de corpus du français annotés en Universal Dependencies. Bruno Guillaume, Marie-Catherine De Marneffe, Guy Perrier, Revue TAL. 602Bruno Guillaume, Marie-Catherine de Marneffe, and Guy Perrier. 2019. Conversion et améliorations de corpus du français annotés en Universal Dependen- cies. Revue TAL, 60(2):71-95.
Les créoles à base française: une introduction. Marie-Christine Hazaël-Massieux, TIPA). 21Travaux Interdisciplinaires du Laboratoire Parole et Langage dMarie-Christine Hazaël-Massieux. 2002. Les créoles à base française: une introduction. Travaux Interdisci- plinaires du Laboratoire Parole et Langage d'Aix-en- Provence (TIPA), 21:63-86.
The INCEpTION platform: Machine-assisted and knowledge-oriented interactive annotation. Jan-Christoph Klie, Michael Bugert, Beto Boullosa, Richard Eckart De Castilho, Iryna Gurevych, Proceedings of the 27th International Conference on Computational Linguistics: System Demonstrations. the 27th International Conference on Computational Linguistics: System DemonstrationsSanta Fe, New MexicoJan-Christoph Klie, Michael Bugert, Beto Boullosa, Richard Eckart de Castilho, and Iryna Gurevych. 2018. The INCEpTION platform: Machine-assisted and knowledge-oriented interactive annotation. In Proceedings of the 27th International Conference on Computational Linguistics: System Demonstrations, pages 5-9, Santa Fe, New Mexico.
FlauBERT: Unsupervised language model pre-training for French. Hang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Maximin Coavoux, Benjamin Lecouteux, Alexandre Allauzen, Benoît Crabbé, Laurent Besacier, Didier Schwab, 10.18653/v1/2021.conll-1.5Proceedings of the 25th Conference on Computational Natural Language Learning. Miryam de Lhoneux, Chen Qiu, and Anders Søgaardthe 25th Conference on Computational Natural Language LearningMarseille, France. Heather Lent, Emanuele Bugliarello; OnlineProceedings of The 12th Language Resources and Evaluation ConferenceHang Le, Loïc Vial, Jibril Frej, Vincent Segonne, Max- imin Coavoux, Benjamin Lecouteux, Alexandre Al- lauzen, Benoît Crabbé, Laurent Besacier, and Didier Schwab. 2020. FlauBERT: Unsupervised language model pre-training for French. In Proceedings of The 12th Language Resources and Evaluation Con- ference, pages 2479-2490, Marseille, France. Heather Lent, Emanuele Bugliarello, Miryam de Lhoneux, Chen Qiu, and Anders Søgaard. 2021. On language models for creoles. In Proceedings of the 25th Conference on Computational Natural Language Learning, pages 58-71, Online.
Haitian Creole: How to build and ship an MT engine from scratch in 4 days, 17 hours, & 30 minutes. William Lewis, Proceedings of the 14th Annual Conference of the European Association for Machine Translation. the 14th Annual Conference of the European Association for Machine TranslationSaint Raphaël, FranceWilliam Lewis. 2010. Haitian Creole: How to build and ship an MT engine from scratch in 4 days, 17 hours, & 30 minutes. In Proceedings of the 14th Annual Conference of the European Association for Machine Translation, Saint Raphaël, France.
Éric de la Clergerie, Djamé Seddah, and Benoît Sagot. Louis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, 10.18653/v1/2020.acl-main.645Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. the 58th Annual Meeting of the Association for Computational LinguisticsOnlineCamemBERT: a tasty French language modelLouis Martin, Benjamin Muller, Pedro Javier Ortiz Suárez, Yoann Dupont, Laurent Romary, Éric de la Clergerie, Djamé Seddah, and Benoît Sagot. 2020. CamemBERT: a tasty French language model. In Proceedings of the 58th Annual Meeting of the Asso- ciation for Computational Linguistics, Online.
How to parse low-resource languages: Cross-lingual parsing, target language annotation, or both?. Ailsa Meechan-Maddon, Joakim Nivre, Proceedings of the Fifth International Conference on Dependency Linguistics (Depling, SyntaxFest 2019). the Fifth International Conference on Dependency Linguistics (Depling, SyntaxFest 2019)Paris, FranceAssociation for Computational LinguisticsAilsa Meechan-Maddon and Joakim Nivre. 2019. How to parse low-resource languages: Cross-lingual pars- ing, target language annotation, or both? In Pro- ceedings of the Fifth International Conference on Dependency Linguistics (Depling, SyntaxFest 2019), pages 112-120, Paris, France. Association for Com- putational Linguistics.
Krik: First Steps into Crowdsourcing POS tags for Kréyòl Gwadloupéyen. Alice Millour, Karën Fort, CCURL 2018. Miyazaki, JapanAlice Millour and Karën Fort. 2018. Krik: First Steps into Crowdsourcing POS tags for Kréyòl Gwad- loupéyen. In CCURL 2018 , Miyazaki, Japan.
Annotations matter: Leveraging multi-task learning to parse UD and SUD. Ali Zeeshan, Daniel Sayyed, Dakota, 10.18653/v1/2021.findings-acl.305Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. OnlineZeeshan Ali Sayyed and Daniel Dakota. 2021. An- notations matter: Leveraging multi-task learning to parse UD and SUD. In Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021, pages 3467-3481, Online.
Combining global models for parsing Universal Dependencies. Tianze Shi, Felix G Wu, Xilun Chen, Yao Cheng, 10.18653/v1/K17-3003Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal DependenciesVancouver, CanadaTianze Shi, Felix G. Wu, Xilun Chen, and Yao Cheng. 2017. Combining global models for parsing Uni- versal Dependencies. In Proceedings of the CoNLL 2017 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies, pages 31-39, Van- couver, Canada.
UD Annotatrix: An annotation tool for Universal Dependencies. Francis M Tyers, Mariya Sheyanova, Jonathan North Washington, Proceedings of the 16th International Workshop on Treebanks and Linguistic Theories. the 16th International Workshop on Treebanks and Linguistic TheoriesPrague, Czech RepublicFrancis M. Tyers, Mariya Sheyanova, and Jonathan North Washington. 2017. UD Annotatrix: An annotation tool for Universal Dependencies. In Proceedings of the 16th International Workshop on Treebanks and Linguistic Theories, pages 10-17, Prague, Czech Republic.
A systematic comparison of methods for low-resource dependency parsing on genuinely low-resource languages. Clara Vania, Yova Kementchedjhieva, Anders Søgaard, Adam Lopez, Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsClara Vania, Yova Kementchedjhieva, Anders Søgaard, and Adam Lopez. 2019. A systematic comparison of methods for low-resource dependency parsing on genuinely low-resource languages. In Proceedings of the 2019 Conference on Empirical Methods in Nat- ural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP), pages 1105-1116, Hong Kong, China. Association for Computational Linguistics.
Joint UD parsing of Norwegian Bokmål and Nynorsk. Erik Velldal, Lilja Øvrelid, Petter Hohle, Proceedings of the 21st Nordic Conference on Computational Linguistics. the 21st Nordic Conference on Computational LinguisticsGothenburg, SwedenErik Velldal, Lilja Øvrelid, and Petter Hohle. 2017. Joint UD parsing of Norwegian Bokmål and Nynorsk. In Proceedings of the 21st Nordic Conference on Computational Linguistics, pages 1-10, Gothenburg, Sweden.
Georges Daniel Véronique, Les langues créoles dans la république française: entre démarcation et revendication1. Entre francisation et démarcation.: Usages hérités et usages renaissantistes des langues régionales de France. 201Georges Daniel Véronique. 2020. Les langues créoles dans la république française: entre démarcation et revendication1. Entre francisation et démarcation.: Usages hérités et usages renaissantistes des langues régionales de France, page 201.
From genesis to creole language: Transfer learning for Singlish Universal Dependencies parsing and POS tagging. Hongmin Wang, Jie Yang, Yue Zhang, 10.1145/3321128ACM Trans. Asian Low-Resour. Lang. Inf. Process. 191Hongmin Wang, Jie Yang, and Yue Zhang. 2019. From genesis to creole language: Transfer learning for Singlish Universal Dependencies parsing and POS tagging. ACM Trans. Asian Low-Resour. Lang. Inf. Process., 19(1).
Universal Dependencies parsing for colloquial Singaporean English. Hongmin Wang, Yue Zhang, Guangyong Leonard Chan, Jie Yang, Hai Leong Chieu, 10.18653/v1/P17-1159Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics. the 55th Annual Meeting of the Association for Computational LinguisticsVancouver, CanadaHongmin Wang, Yue Zhang, GuangYong Leonard Chan, Jie Yang, and Hai Leong Chieu. 2017. Universal Dependencies parsing for colloquial Singaporean En- glish. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics, pages 1732-1744, Vancouver, Canada.
CoNLL 2018 shared task: Multilingual parsing from raw text to Universal Dependencies. Daniel Zeman, Jan Hajič, Martin Popel, Martin Potthast, Milan Straka, Filip Ginter, Joakim Nivre, Slav Petrov, Proceedings of the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal Dependencies. the CoNLL 2018 Shared Task: Multilingual Parsing from Raw Text to Universal DependenciesBrussels, BelgiumDaniel Zeman, Jan Hajič, Martin Popel, Martin Potthast, Milan Straka, Filip Ginter, Joakim Nivre, and Slav Petrov. 2018. CoNLL 2018 shared task: Multilingual parsing from raw text to Universal Dependencies. In Proceedings of the CoNLL 2018 Shared Task: Multi- lingual Parsing from Raw Text to Universal Depen- dencies, pages 1-21, Brussels, Belgium.
Is pos tagging necessary or even helpful for neural dependency parsing?. Houquan Zhou, Yu Zhang, Zhenghua Li, Min Zhang, Natural Language Processing and Chinese Computing. ChamSpringer International PublishingHouquan Zhou, Yu Zhang, Zhenghua Li, and Min Zhang. 2020. Is pos tagging necessary or even help- ful for neural dependency parsing? In Natural Lan- guage Processing and Chinese Computing, pages 179-191, Cham. Springer International Publishing. |
15,865,150 | Kyoto: An Integrated System for Specific Domain WSD | This document describes the preliminary release of the integrated Kyoto system for specific domain WSD. The system uses concept miners (Tybots) to extract domain-related terms and produces a domain-related thesaurus, followed by knowledge-based WSD based on wordnet graphs (UKB). The resulting system can be applied to any language with a lexical knowledge base, and is based on publicly available software and resources. Our participation in Semeval task #17 focused on producing running systems for all languages in the task, and we attained good results in all except Chinese. Due to the pressure of the time-constraints in the competition, the system is still under development, and we expect results to improve in the near future. | [
6022874,
15698938,
9333102,
4357791
] | Kyoto: An Integrated System for Specific Domain WSD
Association for Computational LinguisticsCopyright Association for Computational LinguisticsJuly 2010. 2010
Aitor Soroa a.soroa@ehu.es
Istituto di Linguistica Computazionale
University of the Basque Country
National Taiwan Normal University
Vrije Universiteit
Eneko Agirre
Istituto di Linguistica Computazionale
University of the Basque Country
National Taiwan Normal University
Vrije Universiteit
Oier Lopez De Lacalle
Istituto di Linguistica Computazionale
University of the Basque Country
National Taiwan Normal University
Vrije Universiteit
Monica Monachini monica.monachini@ilc.cnr.it
Istituto di Linguistica Computazionale
University of the Basque Country
National Taiwan Normal University
Vrije Universiteit
Jessie Lo
Istituto di Linguistica Computazionale
University of the Basque Country
National Taiwan Normal University
Vrije Universiteit
Shu-Kai Hsieh
Istituto di Linguistica Computazionale
University of the Basque Country
National Taiwan Normal University
Vrije Universiteit
Wauter Bosma w.bosma@let.vu.nl
Istituto di Linguistica Computazionale
University of the Basque Country
National Taiwan Normal University
Vrije Universiteit
Piek Vossen p.vossen@let.vu.nl
Istituto di Linguistica Computazionale
University of the Basque Country
National Taiwan Normal University
Vrije Universiteit
Kyoto: An Integrated System for Specific Domain WSD
Proceedings of the 5th International Workshop on Semantic Evaluation, ACL 2010
the 5th International Workshop on Semantic Evaluation, ACL 2010Uppsala, Sweden; cAssociation for Computational LinguisticsJuly 2010. 2010
This document describes the preliminary release of the integrated Kyoto system for specific domain WSD. The system uses concept miners (Tybots) to extract domain-related terms and produces a domain-related thesaurus, followed by knowledge-based WSD based on wordnet graphs (UKB). The resulting system can be applied to any language with a lexical knowledge base, and is based on publicly available software and resources. Our participation in Semeval task #17 focused on producing running systems for all languages in the task, and we attained good results in all except Chinese. Due to the pressure of the time-constraints in the competition, the system is still under development, and we expect results to improve in the near future.
Introduction
In this paper we describe the participation of the integrated Kyoto system in the "SemEval-2010 task #17: All-words Word Sense Disambiguation on a Specific Domain" task (Agirre et al., 2010). The goal of our participation was to evaluate the preliminary release of the integrated system for specific domain WSD developed for the Kyoto project 1 . Besides, we wanted to test the performance of our domain specific WSD system on this test set, and to integrate the thesaurus construction software (Tybots) developed for the project. The system can be run for any language and domain if provided with a lexical knowledge base and some background documents on the domain.
We will first present the components of our system, followed by the experimental design and the results. Finally, the conclusions are presented.
1 http://www.kyoto-project.eu
The Kyoto System for Domain Specific WSD
We will present in turn UKB, the Tybots, and the lexical knowledge-bases used.
UKB
UKB is a knowledge-based unsupervised WSD system which exploits the structure of an underlying Language Knowledge Base (LKB) and finds the most relevant concepts given an input context. UKB starts by taking the LKB as a graph of concepts G = (V, E), with a set of vertices V derived from LKB concepts and a set of edges E representing relations among them. Given an input context, UKB applies the so-called Personalized PageRank (Haveliwala, 2002) over it to obtain the most representative senses for the context. PageRank (Brin and Page, 1998) is a method for scoring the vertices V of a graph according to each node's structural importance. The algorithm can be viewed as a random walk process that postulates the existence of a particle that randomly traverses the graph, but at any time may jump to a new vertex with a fixed teleport probability (the complement of the damping factor). After the PageRank calculation, the final weight of node i represents the proportion of time that a random particle spends visiting node i after a sufficiently long time. In standard PageRank, the teleport vector is chosen uniformly, whereas for Personalized PageRank it is chosen from a non-uniform distribution over nodes, specified by a teleport vector.
UKB concentrates the initial probability mass of the teleport vector on the words occurring in the context of the target word, causing all random jumps on the walk to return to these words and thus assigning a higher rank to the senses linked to these words. Moreover, the high rank of the words spreads through the links in the graph and makes all the nodes in their vicinity also receive high ranks. Given a target word, the system checks the relative ranking of its senses and outputs the one ranking highest.
UKB is very flexible and can be used to perform WSD in different settings, depending on the context used for disambiguating a word instance. In this paper we use it to perform general and domain-specific WSD, as shown in section 3. PageRank is calculated by applying an iterative algorithm until convergence below a given threshold is achieved. Following usual practice, we used a damping value of 0.85 and set the threshold value at 0.001. We did not optimize these parameters.
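The following toy sketch illustrates the Personalized PageRank idea with networkx rather than the actual UKB implementation: teleport mass is concentrated on the senses of the context words, and the target word receives the sense with the highest resulting rank. The miniature graph and sense dictionary are invented; alpha=0.85 and tol=0.001 mirror the damping factor and convergence threshold above.

```python
import networkx as nx

# Miniature sense graph and word-to-sense dictionary (invented for illustration).
G = nx.Graph()
G.add_edges_from([
    ("bank#finance", "money#1"), ("bank#finance", "loan#1"),
    ("bank#river", "water#1"), ("bank#river", "shore#1"),
    ("money#1", "loan#1"), ("water#1", "shore#1"),
])
senses = {"bank": ["bank#finance", "bank#river"],
          "money": ["money#1"], "shore": ["shore#1"]}

def disambiguate(target, context_words):
    # Concentrate the teleport mass on the senses of the context words.
    context_nodes = [s for w in context_words for s in senses.get(w, [])]
    personalization = {node: 0.0 for node in G}
    for node in context_nodes:
        personalization[node] = 1.0 / len(context_nodes)
    ranks = nx.pagerank(G, alpha=0.85, tol=0.001, personalization=personalization)
    return max(senses[target], key=lambda s: ranks.get(s, 0.0))

print(disambiguate("bank", ["money"]))   # -> bank#finance
print(disambiguate("bank", ["shore"]))   # -> bank#river
```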
Tybots
Tybots (Term Yielding Robots) are text mining software that mine domain terms from a corpus (e.g. web pages), organizing them in a hierarchical structure and connecting them to wordnets and ontologies to create a semantic model for the domain (Bosma and Vossen, 2010). The software is freely available using Subversion 2 . Tybots try to establish a view on the terminology of the domain which is as complete as possible, discovering relations between terms and ranking terms by domain relevance.
Preceding term extraction, we perform tokenization, part-of-speech tagging and lemmatization, which is stored in the Kyoto Annotation Format (KAF) (Bosma et al., 2009). Tybots work through KAF documents, acquire domain-relevant terms based on syntactic features, gather co-occurrence statistics to decide which terms are significant in the domain, and produce a thesaurus with sets of related words. Section 3.3 describes the specific settings that we used.
Lexical Knowledge bases
We used the following wordnets, as suggested by the organizers: WN30g: English WordNet 3.0 with gloss relations (Fellbaum, 1998). Dutch: The Dutch LKB is part of the Cornetto database version 1.3 (Vossen et al., 2008). The Cornetto database can be obtained from the Dutch/Flanders Taalunie 3 . Cornetto comprises taxonomic relations and equivalence relations from both WordNet 2.0 and 3.0. Cornetto concepts are mapped to English WordNet 3.0. Italian: Italwordnet (Roventini et al., 2003) was created in the framework of EuroWordNet, employs the same set of semantic relations used in EuroWordNet, and includes links to WordNet 3.0 synsets. Chinese: The Chinese WordNet (Version 1.6) is now partially open to the public 4 (Tsai et al., 2001). The Chinese WordNet is also mapped to WordNet 3.0. Table 1 shows the sizes of the graphs created using each LKB as a source. The upper part shows the number of lexical entries, synsets and relations of each LKB. It also depicts the number of links to English WordNet 3.0 synsets.
In addition, we also created bilingual graphs for Dutch, Italian and Chinese, comprising the original monolingual LKB, the links to WordNet 3.0, and WordNet 3.0 itself. We expected these richer graphs to yield better performance. The sizes of the bilingual graphs are shown in the lower part of Table 1.
Experimental setting
All test documents were lemmatized and PoS-tagged using the linguistic processors available within the Kyoto project. In this section we describe the submitted runs.
UKB parameters
We use UKB with the default parameters. In particular, we don't use dictionary weights, which in the case of English come from annotated corpora. This is done in order to make the system fully unsupervised. It's also worth mentioning that in the default setting parts of speech were not used.

Table 2: Overall results of our runs, including precision (P) and recall (R), overall and for each PoS. We include the First Sense (1sense) and random baselines, as well as the best run, as provided by the organizers.
Run1: UKB using context
The first run is an application of the UKB tool in the standard setting. Given the input text, we split it into sentences and disambiguate each sentence at a time. We extract the lemmas which have an entry in the LKB and then apply Personalized PageRank over all of them, obtaining a score for every concept of the LKB. To disambiguate the words in the sentence we just choose, for each word, its associated concept (sense) with the maximum score. In our experiments we build a context of at least 20 content words for each sentence to be disambiguated, taking the sentences immediately before when necessary. UKB allows two main methods of disambiguation, namely ppr and ppr_w2w. We used the latter method, as it has been shown to perform best.
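A sketch of the context construction described above follows, assuming sentences are given as (lemma, PoS) pairs; the content-word PoS set is an assumption of the sketch, and the example sentences are invented (with a low minimum, just to show the backward extension).

```python
CONTENT_POS = {"NOUN", "VERB", "ADJ", "ADV"}   # assumed content-word tags

def contexts(sentences, min_words=20):
    """sentences: list of lists of (lemma, pos) pairs, in document order."""
    for i, sentence in enumerate(sentences):
        ctx = [lemma for lemma, pos in sentence if pos in CONTENT_POS]
        j = i - 1
        while len(ctx) < min_words and j >= 0:
            ctx = [lemma for lemma, pos in sentences[j] if pos in CONTENT_POS] + ctx
            j -= 1
        yield sentence, ctx

sents = [[("rain", "NOUN"), ("fall", "VERB")],
         [("farmer", "NOUN"), ("watch", "VERB"), ("sky", "NOUN")]]
for sentence, ctx in contexts(sents, min_words=4):
    print(len(sentence), ctx)
```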
In this setting we used the monolingual graphs for each language (cf. section 2.3). Note that in this run there is no domain adaptation, it thus serves us as a baseline for assessing the benefits of applying domain adaptation techniques.
Run2: UKB using related words
Instead of disambiguating words using their context of occurrence, we follow a related-words approach: we first obtain a list of related words for each of the target words, as collected from a domain corpus. In a second step, each target word is disambiguated using the N most related words as context (see below). For instance, in order to disambiguate the word environment, we would not take into account the context of occurrence (as in Section 3.2), but we would use the list of most related words in the thesaurus (e.g. "biodiversity, agriculture, ecosystem, nature, life, climate, . . ."). Using UKB over these contexts we obtain the predominant sense for each target word in the domain (McCarthy et al., 2007), which is used to label all occurrences of the target word in the test dataset.
In order to build the thesaurus with the lists of related words, we used Tybots (cf. section 2.2), one for each corpus of the evaluation dataset, i.e. Chinese, Dutch, English, and Italian. We used the background documents provided by the organizers, which we processed using the linguistic processors of the project to obtain the documents in KAF. We used the Tybots with the following settings: we discarded co-occurring words with frequencies below 10 5 ; distributional similarity was computed using the measure of Lin (1998); and we used up to 50 related words for each target word.
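The related-word lists can be approximated with a sketch like the one below, which counts sentence-level co-occurrences and ranks candidates by cosine similarity; this stands in for the Lin (1998) measure actually used by the Tybots, the toy corpus is invented, and on real data the frequency cut-off would be 10 and the list length 50. The resulting lists would then serve as contexts for UKB, as in the PageRank sketch above.

```python
from collections import Counter, defaultdict
from math import sqrt

# Invented miniature domain corpus (one list of lemmas per sentence).
corpus = [
    "the environment and the ecosystem depend on biodiversity".split(),
    "climate change threatens the environment and biodiversity".split(),
    "the ecosystem supports life and nature".split(),
]

cooc = defaultdict(Counter)
for sentence in corpus:
    for word in sentence:
        for other in sentence:
            if other != word:
                cooc[word][other] += 1

def cosine(a, b):
    num = sum(a[x] * b[x] for x in set(a) & set(b))
    den = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def related(target, k=50, min_freq=1):   # min_freq would be 10 on the real corpora
    candidates = [w for w in cooc if w != target and sum(cooc[w].values()) >= min_freq]
    candidates.sort(key=lambda w: cosine(cooc[target], cooc[w]), reverse=True)
    return candidates[:k]

print(related("environment")[:5])
```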
As in run1, we used the monolingual graphs for the LKBs in each language.
Run3: UKB using related words and bilingual graphs
The third run is exactly the same as run2, except that we used bilingual graphs instead of monolingual ones for all languages other than English (cf. section 2.3). There is no run3 for English.

Results

Table 2 shows the results of our system on the different languages. We will analyze different aspects of the results in turn.

Domain adaptation: Using Personalized PageRank over related words (run2 and run3) consistently outperforms the standard setting (run1) in all languages. This result is consistent with our previous work on English, and shows that domain adaptation works for knowledge-based systems.

Monolingual vs. bilingual graphs: As expected, we obtained better results using the bilingual graphs (run3) than with monolingual graphs (run2), showing that the English WordNet has a richer set of relations, and that those relations can be successfully ported to other languages. This confirms that aligning different wordnets at the synset level is highly beneficial.

Overall results: The results of our runs are highly satisfactory. In two languages (Dutch and Italian) our best runs perform better than the first sense baseline, which is typically hard to beat for knowledge-based systems. In English, our system performs close to, but below, the first sense baseline, and in Chinese our method performed below the random baseline. The poor results obtained for Chinese can be due to the LKB topology; an analysis of the graph shows that it is formed by a large number of small components, unrelated to each other. This 'flat' structure heavily penalizes the graph-based method, which is many times unable to discriminate among the concepts of a word. We are currently inspecting the results, and we do not rule out bugs, due to the preliminary status of our software. In particular, we need to re-examine the output of the Tybot for Chinese.
Conclusions
This paper describes the results of the preliminary release of the integrated Kyoto system for domain-specific WSD. It comprises Tybots to construct a domain-related thesaurus, and UKB for knowledge-based WSD based on wordnet graphs. We applied our system to all languages in the dataset, obtaining good results. In fact, our system can be applied to any language with a lexical knowledge base, and is based on publicly available software and resources. We used the wordnets and background texts provided by the organizers of the task.
Our results show that we were succesful in adapting our system to the domain, as we managed to beat the first sense baseline in two languages. Our results also show that adding the English WordNet to the other language wordnets via the available links is beneficial.
Our participation focused on producing running systems for all languages in the task, and we attained good results in all except Chinese. Due to the pressure and the time constraints in the competition, the system is still under development. We are currently revising our system for bugs and fine-tuning it.
Table 1: Wordnets and their sizes (entries, synsets, relations and links to WN30g).
2 http://kyoto.let.vu.nl/svn/kyoto/trunk
3 http://www.inl.nl/nl/lexica/780
4 http://cwn.ling.sinica.edu.tw
5 In the case of Dutch we did not use any threshold due to the small size of the background corpus.
Acknowledgments
This work is partially funded by the European Commission (KYOTO ICT-2007-211423), the Spanish Research Department (KNOW-2 TIN2009-14715-C04-01) and the Basque Government (BERBATEK IE09-262).
Personalizing pagerank for word sense disambiguation. E Agirre, A Soroa, Proceedings of EACL09. EACL09Association for Computational LinguisticsE. Agirre and A. Soroa. 2009. Personalizing pagerank for word sense disambiguation. In Proceedings of EACL09, pages 33-41. Association for Computational Linguistics.
Knowledge-based wsd on specific domains: Performing better than generic supervised wsd. E Agirre, O López De Lacalle, A Soroa, Proceedigns of IJ-CAI. eedigns of IJ-CAIE. Agirre, O. López de Lacalle, and A. Soroa. 2009. Knowledge-based wsd on specific domains: Performing better than generic supervised wsd. In Proceedigns of IJ- CAI. pp. 1501-1506.".
Semeval-2010 task 17: All-words word sense disambiguation on a specific domain. E Agirre, O López De Lacalle, C Fellbaum, S K Hsieh, M Tesconi, M Monachini, P Vossen, R Segers, Same volume. E. Agirre, O. López de Lacalle, C. Fellbaum, S.K. Hsieh, M. Tesconi, M. Monachini, P. Vossen, and R. Segers. 2010. Semeval-2010 task 17: All-words word sense dis- ambiguation on a specific domain. In Same volume.
Bootstrapping language neutral term extraction. W E Bosma, P Vossen, Proceedings of LREC2010. LREC2010W. E. Bosma and P. Vossen. 2010. Bootstrapping language neutral term extraction. In Proceedings of LREC2010, May.
KAF: a generic semantic annotation format. W E Bosma, P Vossen, A Soroa, G Rigau, M Tesconi, A Marchetti, M Monachini, C Aliprandi, Proceedings of the GL2009 Workshop on Semantic Annotation. the GL2009 Workshop on Semantic AnnotationW. E. Bosma, P. Vossen, A. Soroa, G. Rigau, M. Tesconi, A. Marchetti, M. Monachini, and C. Aliprandi. 2009. KAF: a generic semantic annotation format. In Proceed- ings of the GL2009 Workshop on Semantic Annotation.
The anatomy of a large-scale hypertextual web search engine. Computer Networks and ISDN Systems. S Brin, L Page, 30S. Brin and L. Page. 1998. The anatomy of a large-scale hypertextual web search engine. Computer Networks and ISDN Systems, 30(1-7).
WordNet: An Electronical Lexical Database. C Fellbaum, The MIT PressCambridge, MAC. Fellbaum. 1998. WordNet: An Electronical Lexical Database. The MIT Press, Cambridge, MA.
Topic-sensitive pagerank. H Haveliwala, WWW '02: Proceedings of the 11th international conference on WWW. New York, NY, USAACMH. Haveliwala. 2002. Topic-sensitive pagerank. In WWW '02: Proceedings of the 11th international conference on WWW, pages 517-526, New York, NY, USA. ACM.
Automatic retrieval and clustering of similar words. D Lin, Proceedings of ACL98. ACL98Montreal, CanadaD. Lin. 1998. Automatic retrieval and clustering of similar words. In Proceedings of ACL98, Montreal, Canada.
Unsupervised acquisition of predominant word senses. D Mccarthy, R Koeling, J Weeds, J Carroll, Computational Linguistics. 433D. McCarthy, R. Koeling, J. Weeds, and J. Carroll. 2007. Unsupervised acquisition of predominant word senses. Computational Linguistics, 33(4).
Italwordnet: building a large semantic database for the automatic treatment of Italian. A Roventini, A Alonge, F Bertagna, N Calzolari, J Cancila, C Girardi, B Magnini, R Marinelli, M Speranza, A Zampolli, Linguistica Computazionale, Special Issue (XVIII-XIX). A. Roventini, A. Alonge, F. Bertagna, N. Calzolari, J. Can- cila, C. Girardi, B. Magnini, R. Marinelli, M. Speranza, and A. Zampolli. 2003. Italwordnet: building a large semantic database for the automatic treatment of Italian. Linguistica Computazionale, Special Issue (XVIII-XIX), pages 745-791.
Definition and tests for lexical semantic relations in Chinese. B S Tsai, C R Huang, S C Tseng, J Y Lin, K J Chen, Y S Chuang, Proceedings CLSW. CLSWB.S. Tsai, C.R. Huang, S.c. Tseng, J.Y. Lin, K.J. Chen, and Y.S. Chuang. 2001. Definition and tests for lexical se- mantic relations in Chinese. In Proceedings CLSW 2001.
The cornetto database: the architecture and alignment issues. P Vossen, I Maks, R Segers, H Van Der, H Vliet, Van Zutphen, Proceedings GWC 2008. GWC 2008P. Vossen, I. Maks, R. Segers, H. van der Vliet, and H. van Zutphen. 2008. The cornetto database: the architecture and alignment issues. In Proceedings GWC 2008, pages 485-506. |
11,595,667 | Rule-Based MWE Identification and Predominant-Supersense Tagging | This paper presents our approach towards the SemEval-2016 Task 10 -Detecting Minimal Semantic Units and their Meanings. Systems are expected to provide a representation of lexical semantics by (1) segmenting tokens into words and multiword units and(2)providing a supersense tag for segments that function as nouns or verbs. Our pipeline rule-based system uses no external resources and was implemented using the mwetoolkit. First, we extract and filter known MWEs from the training corpus. Second, we group input tokens of the test corpus based on this lexicon, with special treatment for non-contiguous expressions. Third, we use an MWE-aware predominantsense heuristic for supersense tagging. We obtain an F-score of 51.48% for MWE identification and 49.98% for supersense tagging. | [
10738628,
729163,
17741516,
203279,
14182801,
30695478,
1226876,
2390655
] | Rule-Based MWE Identification and Predominant-Supersense Tagging
June 16-17, 2016. 2016. 2016 Task 10
Silvio Ricardo Cordeiro
Universidade Federal do Rio Grande do Sul
Porto AlegreBrazil
LIF UMR 7279
Aix Marseille Université
CNRS
Carlos Ramisch
LIF UMR 7279
Aix Marseille Université
CNRS
Aline Villavicencio avillavicencio@inf.ufrgs.brcarlos.ramisch@lif.univ-mrs.fr
Universidade Federal do Rio Grande do Sul
Porto AlegreBrazil
Rule-Based MWE Identification and Predominant-Supersense Tagging
Association for Computational Linguistics UFRGS&LIF at SemEval
SemEval-2016San Diego, CaliforniaJune 16-17, 2016. 2016. 2016 Task 10
This paper presents our approach towards the SemEval-2016 Task 10 - Detecting Minimal Semantic Units and their Meanings. Systems are expected to provide a representation of lexical semantics by (1) segmenting tokens into words and multiword units and (2) providing a supersense tag for segments that function as nouns or verbs. Our pipeline rule-based system uses no external resources and was implemented using the mwetoolkit. First, we extract and filter known MWEs from the training corpus. Second, we group input tokens of the test corpus based on this lexicon, with special treatment for non-contiguous expressions. Third, we use an MWE-aware predominant-sense heuristic for supersense tagging. We obtain an F-score of 51.48% for MWE identification and 49.98% for supersense tagging. Our system ranks 4th on both tasks, outperforming the baseline by 6% on Subtask-A and 14% on Subtask-B.
Introduction
Accurate segmentation and semantic disambiguation of minimal text units is a major challenge in the general pipeline of NLP applications. A machine translation system, for example, needs to decide what is the intended meaning for a given word or phrase in its context, so that it may translate it into an equivalent meaning in the target language.
While determining the meaning of single words is a difficult task on its own, the problem is compounded by the pervasiveness of Multiword Expressions (MWEs). MWEs are semantic units that span over multiple lexemes in the text (e.g. dry run, look up, fall flat). Their meaning cannot be inferred by applying regular composition rules on the meanings of their component words. The task of semantic tagging is thus deeply intertwined with the identification of multiword expressions.
This paper presents our solution to the DiMSUM shared task (Schneider et al., 2016), where the evaluated systems are expected to perform both semantic tagging and multiword identification. Our pipeline system first detects and groups MWEs and then assigns supersense tags, as two consecutive steps. For MWE identification, we use a task-specific instantiation of the mwetoolkit (Ramisch, 2015), handling both contiguous and non-contiguous MWEs with some degree of customization (Cordeiro et al., 2015). Additionally, MWE type-level candidates are extracted without losing track of their token-level occurrences, to guarantee that all the MWE occurrences learned from the training data are projected onto the test corpus. For semantic tagging we adopted a predominant-sense heuristic.
In the remainder of this paper, we present related work ( § 2), then we present and discuss the results of the MWE identification subsystem ( § 3) and of the supersense tagging subsystem ( § 4). We then conclude and share ideas for future improvements ( § 5).
Related Work
Practical solutions for rule-based MWE identification include tools like jMWE (Kulkarni and Finlayson, 2011), a library for direct lexicon projection based on preexisting MWE lists. Finite-state transducers can also be used to take into account the internal morphology of component words and perform efficient tokenization based on MWE dictionaries (Savary, 2009). The problem of MWE identification has also been modeled using supervised machine learning. Probabilistic MWE taggers usually encode the data using a begin-inside-outside scheme and learn CRF-like taggers on it (Constant and Sigogne, 2011;Schneider et al., 2014). The mwetoolkit (Ramisch, 2015) provides command-line programs that allow one to discover new MWE candidate lists, filter them and project them back on text according to some parameters. Our system uses the latter as basis for MWE identification.
Word sense disambiguation (WSD) methods can be roughly classified into knowledge-based, supervised and unsupervised. Knowledge-based methods use lexico-semantic taxonomies like WordNet to calculate the similarity between context and target words (Lesk, 1986). Supervised approaches generally use context-sensitive classifiers (Cabezas et al., 2001). Unsupervised approaches using clustering and distributional similarity (Brody and Lapata, 2008; Goyal and Hovy, 2014) can also be employed for WSD. Both supervised and unsupervised WSD techniques have also been used to distinguish literal from idiomatic uses of MWEs (Fazly et al., 2009; Diab and Bhutada, 2009). Nonetheless, systematically choosing the most frequent sense is a surprisingly good baseline, not always easy to beat (McCarthy et al., 2007; Navigli, 2009). This was also verified for MWE disambiguation (Uchiyama et al., 2005). Thus, in this work, we implemented a simple supervised predominant-sense heuristic and will investigate more sophisticated WSD techniques as future work.
MWE Identification
Our MWE identification algorithm uses 6 different rule configurations, targeting different MWE classes. Three of these are based on data from the training corpus, while the other three are unsupervised. The parameters of each configuration are optimized on a held-out development set, consisting of 1/9 of the training corpus. The final system is the union of all configurations (when there is an overlap, we favor longer MWEs). For the 3 supervised configurations, annotated MWEs are extracted from the training data and then filtered: we only keep combinations that have been annotated often enough in the training corpus. In other words, we keep MWE candidates whose proportion of annotated instances with respect to all occurrences in the training corpus is above a threshold t, discarding the rest. The thresholds were manually chosen based on what seemed to yield better results on the development set. Finally, we project the resulting list of MWE candidates onto the test data, that is, we segment as MWEs the test token sequences that are contained in the lexicon extracted from the training data (a sketch of this filter-and-project procedure is given after the configuration list below). These configurations are:
CONTIG Contiguous MWEs annotated in the training corpus are extracted and filtered with a threshold of t = 40%. That is, we create a lexicon containing all contiguous lemma+POS sequences for which at least 40% of the occurrences in the training corpus were annotated. The resulting lexicon is projected on the test corpus whenever that contiguous sequence of words is seen.
GAPPY Non-contiguous MWEs are extracted from the training corpus and filtered with a threshold of t = 70%. The resulting MWEs are projected on the test corpus using the following rule: an MWE is deemed to occur if its component words appear sequentially with at most a total of 3 gap words in between them.
NOUN 2 -KN Collect all noun-noun sequences in the test corpus that also appear at least once in the training corpus (known compounds), and filter them with a threshold of t = 70%. The resulting list is projected onto the test corpus.
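The following Python sketch illustrates the filter-and-project step used by the supervised configurations. It is not the original mwetoolkit implementation: the corpus representation (tokens as (lemma, POS) pairs, MWEs as index tuples) is an assumption made for illustration, occurrences are counted here over contiguous sequences only, and the gappy projection assumes its lexicon was filtered in the same way.

```python
from collections import Counter

def build_mwe_lexicon(train_data, threshold):
    """train_data: list of (tokens, mwes) pairs, where tokens is a list of
    (lemma, POS) pairs and mwes is a list of index tuples annotated as MWEs.
    Keeps candidates whose annotated/occurrence ratio reaches the threshold."""
    annotated, occurrences = Counter(), Counter()
    candidates = set()
    for tokens, mwes in train_data:
        for idx in mwes:
            seq = tuple(tokens[i] for i in idx)
            annotated[seq] += 1
            candidates.add(seq)
    max_len = max((len(c) for c in candidates), default=0)
    for tokens, _ in train_data:          # count contiguous occurrences of candidates
        for i in range(len(tokens)):
            for j in range(i + 2, min(i + max_len, len(tokens)) + 1):
                seq = tuple(tokens[i:j])
                if seq in candidates:
                    occurrences[seq] += 1
    return {seq for seq in candidates
            if occurrences[seq] and annotated[seq] / occurrences[seq] >= threshold}

def project_contiguous(lexicon, tokens):
    """CONTIG-style projection: annotate every contiguous match of a known MWE."""
    return [tuple(range(i, j))
            for i in range(len(tokens))
            for j in range(i + 2, len(tokens) + 1)
            if tuple(tokens[i:j]) in lexicon]

def project_gappy(lexicon, tokens, max_gap=3):
    """GAPPY-style projection: components must appear in order, with at most
    max_gap intervening tokens in total."""
    spans = set()
    for seq in lexicon:
        for start in range(len(tokens)):
            idx, pos, gaps = [], start, 0
            for comp in seq:
                while pos < len(tokens) and tokens[pos] != comp:
                    if idx:                 # only count gaps inside the MWE
                        gaps += 1
                    pos += 1
                if pos == len(tokens) or gaps > max_gap:
                    idx = []
                    break
                idx.append(pos)
                pos += 1
            if idx:
                spans.add(tuple(idx))
    return sorted(spans)
```

With threshold=0.4 and project_contiguous this corresponds to the CONTIG configuration; with threshold=0.7 and project_gappy (max_gap=3) it corresponds to GAPPY.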
We further developed 3 additional configurations based on empirical findings. Here we identify MWEs in the test corpus based on POS-tag patterns, without any filtering and thus without looking at the training corpus (except that, for NOUN 2 -UKN, we exclude known compounds, as otherwise that would undo the filtering work done by NOUN 2 -KN):
NOUN 2 -UKN Collect all noun-noun sequences in the test corpus that never appear in the training corpus (unknown compounds), and project all of them back on the test corpus.
PROPN 2..∞ Collect sequences of two or more contiguous words with POS-tag PROPN and project all of them back onto the test corpus.
VP Collect verb-particle candidates and project them back onto the test corpus (sketched in the code example below). A verb-particle candidate is a pair of words under these constraints: the first word must have POS-tag VERB and cannot have lemma go or be. The two words may be separated by an N (in the remainder of the paper, we abbreviate the POS tag NOUN as N) or a PROPN. The second word must be in a list of the 13 most frequent non-literal particles: about, around, away, back, down, in, into, off, on, out, over, through, up (Sinclair, 2012). Finally, the particle must be followed by a word with one of these POS-tags: ADV, ADP, PART, CONJ, PUNCT. Even though we might miss some cases, this final delimiter avoids capturing regular verb-PP sequences.
Table 1 presents the results for each isolated configuration (evaluated on the test corpus, with all MWEs). These results are calculated based on the fuzzy metrics of the shared task (Schneider et al., 2014), where partial MWE matches are taken into account. Our final MWE identification system is the union of all rule configurations described above.
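A minimal sketch of the VP pattern as we read it from the description above (not the original implementation; the token representation as (word, lemma, POS) triples is an assumption for illustration):

```python
PARTICLES = {"about", "around", "away", "back", "down", "in", "into",
             "off", "on", "out", "over", "through", "up"}
DELIMITERS = {"ADV", "ADP", "PART", "CONJ", "PUNCT"}

def verb_particle_candidates(sent):
    """sent: list of (word, lemma, pos) triples for one sentence.
    Returns (verb_index, particle_index) pairs satisfying the VP constraints."""
    pairs = []
    for i, (_, lemma, pos) in enumerate(sent):
        if pos != "VERB" or lemma in {"go", "be"}:
            continue
        j = i + 1
        # the verb and the particle may be separated by one N or PROPN
        if j < len(sent) and sent[j][2] in {"NOUN", "PROPN"}:
            j += 1
        if j >= len(sent) or sent[j][0].lower() not in PARTICLES:
            continue
        # the particle must be followed by a delimiting POS tag
        if j + 1 < len(sent) and sent[j + 1][2] in DELIMITERS:
            pairs.append((i, j))
    return pairs
```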
Error Analysis
Table 2 presents the system results for the most common POS-tag sequences in the test corpus, using an exact match (an MWE is either correct or incorrect). Overall results are presented in both exact and fuzzy metrics.
N_N errors Since our system looks for all occurrences of adjacent noun-noun pairs, we obtain a high recall for N_N compounds. The most common false positive errors are presented below.
• Not in the same phrase In 19 cases, our system has identified two Ns that are not in the same phrase; e.g. *when I have a problem customer services don't want to know. In order to realize that these nouns are not related, we would need parsing information. Nonetheless, it is not clear whether an off-the-shelf parser could solve these ambiguities in the absence of punctuation.
• Partial N_N_N 17 cases have been missed due to only the first two nouns in the MWE being identified; e.g. *Try the memory foam pillows! -instead of memory foam pillows.
• Partial ADJ_N_N 10 cases have been missed; e.g. *My sweet pea plants arrived 00th May completely dried up and dead! -instead of sweet pea plants. These cases are a consequence of the fact that we do not look for adjective-noun pairs (see ADJ_N errors below).
• Compositional N_N In 24 cases, our system identified a compositional compound; e.g. *Quality gear guys, excellent! Semantic features would be required to filter such cases out.
• Questionable N tags 10 false noun compounds were found due to words such as today being tagged as nouns (e.g. *I'm saving gas today). Another 5 cases had adjectives classified as nouns: *Maybe this is a kind of an artificial way to read an e-book.
VERB_ADP errors Most of the VERB_ADP expressions were caught by the VP configuration, but we still had some false negatives. In 7 cases, the underlying particle was not in our list (e.g. I regret ever going near their store), while in 9 other cases, the particle was followed by a noun phrase (e.g. Givin out Back shots). 5 of the missed MWEs could have been found by accepting the particle to be followed by a SCONJ, or to be followed by the end of the line as delimiters. Most of the false positives were due to the verb being followed by an indirect object or prepositional phrase. We believe that disambiguating these cases would require valency information, either from a lexicon or automatically acquired from large corpora (Preiss et al., 2007).
ADJ_N errors While the few ADJ_N pairs that our system identified were usually correct MWEs, most of the annotated cases were missed. This is because we do not specifically look for adjective-noun pairs, due to the high likelihood of them being compositional. For example, a simple ADJ_N annotation scheme (as performed in NOUN 2 -UKN) would have achieved a precision of only 69/505 = 14%. Out of all annotated sentences, in 23 cases the noun is transparent, and we could replace the adjective by a synonym; e.g. I guess people are going again next week, do you think you'll go? (which could be replaced by the following week). In another 17 cases, the noun is transparent and the adjective suggestive of the global meaning, even though it is fixed; e.g. 23 is the lucky number (but not *fortunate number, albeit related to luck).
These cases could be dealt with using fixedness tests such as substitution and permutation (Fazly et al., 2009;Ramisch et al., 2008).
PROPN_PROPN errors Since our system looks for all occurrences of adjacent PROPN pairs, we obtain near-perfect recall for PROPN_PROPN compounds. Most false positives were caused by possessives or personal titles, which were annotated as part of the MWE in the gold standard.
VERB_PART errors The results for VERB_PART are similar to the ones found for VERB_ADP: 3 false negatives are due to the particle not being in our list, and in another 7 cases they are followed by a noun phrase. Additionally, in 6 cases the particle was followed by a verb (e.g. Stupid Kilkenny didn't get to meet @Royseven). 4 false positives were CONTIG cases of go to being identified as an MWE (e.g. *In my mother's day, she didn't go to college). In the training corpus, this MWE had been annotated 57% of the time, but mostly in future constructions (e.g. Definitely not going to purchase a car from here). Such canonical forms would be easy to model with a specific contextual rule of the form going to verb.
PROPN_N errors While the few PROPN_N pairs we found were all correct MWEs, most of the annotated cases were missed. These cases did not receive special attention during the development of the system because we incorrectly perceived them as infrequent. However, using only an annotation scheme such as NOUN 2 -UKN, we could have achieved a precision of 72% for these MWEs.
N_N_N errors The occurrence of N_N_N sequences is rare in the training corpus, and we did not specifically look for them, which explains our recall of 0%. By annotating the longest sequence of Ns in the corpus (NOUN 2..∞ ), we could have obtained a precision of 56% and recall of 91% for N_N_N. The precision of N_N would also increase to 70% (with a recall of 93%). If we then replace NOUN 2 by NOUN 2..∞ , the full-system's F-score increases to 56.23%.
ADP_N errors The false positives were ambiguous determinerless PPs that can be compositional or not according to the context. For instance, the system identified *Try them all, in order after seeing The Big Lebowski is in order tonight. False negatives were mainly due to threshold-based filters, like at all and in peace. Unsupervised MWE discovery on large corpora using context-sensitive association measures could have helped in these cases.
VERB_N errors We only generated 4 false positives, which look like light-verb constructions missed by the annotators (give ride, place order). False negatives include 8 cases of gerunds POS-tagged as verbs (e.g. to listen to flying saucers), which are actually similar to the ADJ_N cases discussed above. We also found 7 false negatives, mainly light-verb constructions, that were not present in the training corpus (take place, take control).
DET_N errors 8 false negatives were compositional time adjuncts (e.g. this morning, this season). False positives are mainly cases that seem inconsistent between training and test data concerning frequent quantifiers (e.g. a lot, a bit, a couple).
Noun compounds (two or more Ns in a row) account for a significant proportion of MWEs in the training corpus ( 601 /4232 = 14%) and an even larger amount of the testing corpus ( 203 /837 = 24%). The NOUN 2 rule sets were essential to obtaining good results. If we remove NOUN 2 from our system, its global performance would drop to a fuzzy F 1 = 33.79%.
The domain of the corpus does not seem to have a great influence on our method's performance. Our lowest performance is on the Reviews subcorpus (fuzzy F 1 = 49.57%) and our best performance is on TED (fuzzy F 1 = 56.76%).
Some of the missed MWEs are questionable and we feel that our system should not annotate them. These include regular verbal chains (shouldn't have, have been), infinitival and selected preposition to (to take, go to) and compositional noun phrases (this Saturday). Fortunately, these cases correspond to a small proportion of the data.
Supersense Tagging
Supersense tagging takes place after MWE identification. Sense tags are coarse top-level WordNet synsets; the tagsets for nouns and verbs contain 26 and 15 supersense tags, respectively. We use a predominant-sense heuristic to perform WSD.
Before tagging the test data, our system collects all annotated supersense tags from MWEs in the training corpus. We create a mapping with entries of the form (w_1, w_2, ..., w_N) -> S, where each MWE component w_i is a (lemma_i, POStag_i) pair. This mapping indicates the most frequent tag S associated with a given MWE. Single words are treated as length-1 MWEs and are also added to this mapping.
The supersense tagging algorithm then goes through all segmented units (MWEs or single words) in the test corpus and annotates them according to the most common tag seen in the training set. If a tag has not been seen for a given word or MWE, we do not tag it at all. This heuristic is very simple and not very realistic. Nonetheless, it allowed us to have a minimal supersense tagger quickly and then focus on accurate MWE identification as the main contribution of our system.
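The heuristic can be summarized by the following Python sketch. It is a simplified re-implementation, not the original code; the representation of segmented units as dicts with 'components' and 'supersense' fields is an assumption for illustration.

```python
from collections import Counter, defaultdict

def train_predominant_senses(train_sents):
    """train_sents: list of sentences; each sentence is a list of segmented
    units, a unit being a dict with 'components' (a tuple of (lemma, POS)
    pairs, of length 1 for single words) and 'supersense' (or None)."""
    counts = defaultdict(Counter)
    for sent in train_sents:
        for unit in sent:
            if unit["supersense"] is not None:
                counts[unit["components"]][unit["supersense"]] += 1
    # keep only the most frequent tag seen for each word or MWE
    return {key: c.most_common(1)[0][0] for key, c in counts.items()}

def tag_supersenses(model, test_sents):
    """Tag each segmented unit with its most frequent training tag; units never
    seen with a tag in the training data are left untagged (None)."""
    for sent in test_sents:
        for unit in sent:
            unit["supersense"] = model.get(unit["components"])
    return test_sents
```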
Error Analysis
Tables 3 and 4 show the confusion matrices of our system for the 10 most common tags. Each row corresponds to a gold tag and contains the distribution of predicted tags. The perfect system would have numbers only in the main diagonal and zeros everywhere else. The skewed distribution of supersense tags makes our simple heuristic quite effective when the MWE/word has been observed in the training data.
Known nouns seem easy to tag. Most of our errors come from the fact that we did not observe instances of a noun in the training data, and thus did not assign it any tag (column "skipped"). Some distinctions seem harder than others due to similar semantic classes: attributive/cognition and event/time.
The occurrence of verbs in the training data is less of a problem than their polysemy. Stative verbs correspond to the large majority of verbs in the dataset. This is magnified by the nature of the corpus: reviews tend to use stative verbs to talk about product characteristics, tweets often use them to describe the state of the author. While very frequent, stative verbs are also difficult to disambiguate: most false negatives were tagged as change verbs while most false positives were tagged as social verbs. Some distinctions seem extremely hard to make, specially for less frequent supersense tags like contact/motion and perception/cognition.
Conclusions and Future Work
We developed a simple rule-based system that was able to obtain competitive results. Its main advantage is that it was very quick to implement in the context of the generic framework of the mwetoolkit. The system is freely available as part of the official mwetoolkit release (http://mwetoolkit.sourceforge.net). The main limitation of our system is that it cannot properly take unseen MWEs into account and generalize from seen instances. Moreover, most of our rule sets are highly language dependent.
Ideas for future improvements include:
• Adding specific rules for verb-particle constructions, probably based on a lexicon of idiomatic combinations.
• Replacing the CONTIG method by a sequence tagger for contiguous MWEs (e.g. using a CRF), in order to identify unknown MWEs based on generalizations made from known MWEs (Constant and Sigogne, 2011;Schneider et al., 2014).
• Taking parse trees into account to distinguish MWEs from accidental cooccurrences (Nasr et al., 2015).
• Using semantic-based association measures and semantic-based features based on word embeddings to target idiomatic MWEs (Salehi et al., 2015).
• Using fixedness features to identify and disambiguate very productive patterns like ADJ_N (Ramisch et al., 2008;Fazly et al., 2009).
• Developing a more realistic WSD algorithm for supersense tagging, able to tag unseen words and MWEs and to take context into account.
Configuration   Precision   Coverage
CONTIG          57.9%       11.6%
GAPPY           36.0%        0.9%
NOUN 2 -KN     100.0%        1.6%
NOUN 2 -UKN     80.2%       18.9%
PROPN 2..∞      96.0%        8.5%
VP              71.2%        4.2%
Table 1: Precision and coverage per MWE annotation rule configuration. Coverage is the recall of each configuration applied independently. The final recall of the system is not the sum of the coverage values because MWE candidate lexicons may overlap (multiple configurations may have identified the same MWE).
POS-tags          Precision          Recall             F1
N_N               170/278 = 61%      170/181 = 94%      74.0%
VERB_ADP           43/60  = 72%       43/73  = 59%      64.9%
ADJ_N               5/6   = 83%        5/69  =  7%      12.9%
PROPN_PROPN        65/82  = 79%       65/66  = 98%      87.5%
VERB_PART          31/37  = 84%       31/49  = 63%      72.0%
PROPN_N             1/1   = 100%       1/34  =  3%       5.8%
N_N_N               0/0   = 100%       0/22  =  0%       0.0%
ADP_N              10/14  = 71%       10/22  = 45%      55.1%
VERB_N              1/5   = 20%        1/16  =  6%       9.2%
DET_N               4/23  = 17%        4/16  = 25%      20.2%
ADJ_N_N             0/0   = 100%       0/11  =  0%       0.0%
Overall (exact)   364/613 = 59%      364/837 = 43%      50.2%
Overall (fuzzy)   460/635 = 72%     461/1115 = 41%      52.6%
Table 2: MWE identification results on the test set per POS-tag (exact match per POS-tag; overall results in both exact and fuzzy metrics).
Table 3: Confusion matrix for noun supersense tagging. Skipped segments are those absent in training data.
Table 4: Confusion matrix for verb supersense tagging. Skipped segments are those absent in training data. Rows are gold tags and columns are predicted tags (STAT, COMM, COGN, CHAN, EMOT, MOTI, PERC, POSS, SOCI, CONT, skipped); each row below lists its cell counts in their original column order (empty cells omitted), followed by the row total:
v.stative: 617, 5, 21, 3, 3, 14, 4, 47, 1, 53 (total 769)
v.communic: 8, 201, 3, 2, 7, 1, 4, 14, 4, 36 (total 280)
v.cognition: 14, 14, 158, 1, 1, 2, 11, 22, 1, 25 (total 250)
v.change: 49, 5, 2, 67, 12, 6, 22, 6, 39 (total 210)
v.emotion: 2, 7, 44, 1, 72, 3, 1, 12 (total 143)
v.motion: 5, 2, 9, 77, 8, 20 (total 122)
v.perception: 1, 16, 1, 2, 1, 69, 8, 1, 10 (total 109)
v.possession: 17, 5, 1, 1, 1, 43, 5, 5 (total 79)
v.social: 16, 2, 3, 2, 5, 29, 18 (total 75)
v.contact: 10, 4, 1, 2, 2, 14, 4, 3, 10, 15 (total 70)
not-a-verb: 355, 9, 9, 12, 7, 7, 4 (total 405)
Acknowledgments
This work has been funded by the French Agence Nationale pour la Recherche through projects PARSEME-FR (ANR-14-CERA-0001) and ORFEO (ANR-12-CORP-0005), and by French-Brazilian cooperation projects CAMELEON (CAPES-COFECUB #707/11) and AIM-WEST (FAPERGS-INRIA 1706-2551/13-7). Part of the results presented in this paper were obtained through research on a project titled "Simplificação Textual de Expressões Complexas", sponsored by Samsung Eletrônica da Amazônia Ltda. under the terms of Brazilian federal law No. 8.248/91.
References
Samuel Brody and Mirella Lapata. 2008. Good neighbors make good senses: Exploiting distributional similarity for unsupervised WSD. In Proceedings of the 22nd International Conference on Computational Linguistics (Coling 2008), pages 65-72, Manchester, UK.
Clara Cabezas, Philip Resnik, and Jessica Stevens. 2001. Supervised sense tagging using support vector machines. In Proceedings of SENSEVAL-2, Second International Workshop on Evaluating Word Sense Disambiguation Systems, pages 59-62, Toulouse, France.
Matthieu Constant and Anthony Sigogne. 2011. MWU-aware part-of-speech tagging with a CRF model and lexical resources. In Proceedings of the Workshop on Multiword Expressions: from Parsing and Generation to the Real World, pages 49-56, Portland, Oregon, USA.
Silvio Ricardo Cordeiro, Carlos Ramisch, and Aline Villavicencio. 2015. Token-based MWE identification strategies in the mwetoolkit. In PARSEME's 4th general meeting.
Mona Diab and Pravin Bhutada. 2009. Verb noun construction MWE token classification. In Proceedings of the Workshop on Multiword Expressions: Identification, Interpretation, Disambiguation and Applications, pages 17-22, Singapore.
Afsaneh Fazly, Paul Cook, and Suzanne Stevenson. 2009. Unsupervised type and token identification of idiomatic expressions. Computational Linguistics, 35(1):61-103.
Kartik Goyal and Eduard Hovy. 2014. Unsupervised word sense induction using distributional statistics. In Proceedings of COLING 2014, the 25th International Conference on Computational Linguistics: Technical Papers, pages 1302-1310, Dublin, Ireland.
Nidhi Kulkarni and Mark Alan Finlayson. 2011. jMWE: A Java toolkit for detecting multi-word expressions. In Proceedings of the Workshop on Multiword Expressions: from Parsing and Generation to the Real World, pages 122-124.
Michael Lesk. 1986. Automatic sense disambiguation using machine readable dictionaries: How to tell a pine cone from an ice cream cone. In Proceedings of the Fifth International Conference on Systems Documentation (SIGDOC 86), pages 24-26, Toronto, Canada.
Diana McCarthy, Rob Koeling, Julie Weeds, and John Carroll. 2007. Unsupervised acquisition of predominant word senses. Computational Linguistics, 33(4):553-590.
Alexis Nasr, Carlos Ramisch, José Deulofeu, and André Valli. 2015. Joint dependency parsing and multiword expression tokenization. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics and the 7th International Joint Conference on Natural Language Processing (Volume 1: Long Papers), pages 1116-1126, Beijing, China.
Roberto Navigli. 2009. Word sense disambiguation: A survey. ACM Computing Surveys, 41(2):10:1-10:69.
Judita Preiss, Ted Briscoe, and Anna Korhonen. 2007. A system for large-scale acquisition of verbal, nominal and adjectival subcategorization frames from corpora. In Proceedings of the 45th Annual Meeting of the Association of Computational Linguistics, pages 912-919, Prague, Czech Republic.
Carlos Ramisch, Paulo Schreiner, Marco Idiart, and Aline Villavicencio. 2008. An evaluation of methods for the extraction of multiword expressions. In Proceedings of the LREC Workshop Towards a Shared Task for Multiword Expressions (MWE 2008), pages 50-53, Marrakech, Morocco.
Carlos Ramisch. 2015. Multiword Expressions Acquisition: A Generic and Open Framework. Theory and Applications of Natural Language Processing, volume XIV. Springer. 230 p.
Bahar Salehi, Paul Cook, and Timothy Baldwin. 2015. A word embedding approach to predicting the compositionality of multiword expressions. In Proceedings of the 2015 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 977-983, Denver, Colorado.
Agata Savary. 2009. Multiflex: A multilingual finite-state tool for multi-word units. In CIAA, volume 5642 of Lecture Notes in Computer Science, pages 237-240. Springer.
Nathan Schneider, Emily Danchik, Chris Dyer, and Noah A. Smith. 2014. Discriminative lexical semantic segmentation with gaps: running the MWE gamut. Transactions of the Association for Computational Linguistics, 2:193-206.
Nathan Schneider, Dirk Hovy, Anders Johannsen, and Marine Carpuat. 2016. SemEval 2016 Task 10: Detecting Minimal Semantic Units and their Meanings (DiMSUM). In Proceedings of SemEval, San Diego, California, USA.
John Sinclair, editor. 2012. Collins COBUILD Phrasal Verbs Dictionary. Harper Collins, Glasgow, UK, third edition. 528 p.
Kiyoko Uchiyama, Timothy Baldwin, and Shun Ishizaki. 2005. Disambiguating Japanese compound verbs. Computer Speech and Language, 19(4):497-512.
17,919,789 | Improvement of Statistical Machine Translation using Charater- Based Segmentation with Monolingual and Bilingual Information | We present a novel segmentation approach for Phrase-Based Statistical Machine Translation (PB-SMT) to languages where word boundaries are not obviously marked by using both monolingual and bilingual information and demonstrate that (1) unsegmented corpus is able to provide the nearly identical result compares to manually segmented corpus in PB-SMT task when a good heuristic character clustering algorithm is applied on it, (2) the performance of PB-SMT task has significantly increased when bilingual information are used on top of monolingual segmented result. Our technique, instead of focusing on word separation, mainly concentrate on a group of character. First, we group several characters that reside in an unsegmented corpus by employing predetermined constraints and certain heuristics algorithms. Secondly, we enhance the segmented result by incorporating the character group repacking based on alignment confidence. We evaluate the effectiveness of our method on PB-SMT task using English-Thai, English-Lao and English-Burmese language pairs and report the best improvement of 8.1% increase in BLEU score on English-Thai pair. | [
2420674,
549380,
2913101,
8884845,
5219389
] | Improvement of Statistical Machine Translation using Charater- Based Segmentation with Monolingual and Bilingual Information
Vipas Sutantayawalee, Peerachet Porkeaw, Prachya Boonkwan, Sitthaa Phaholphinyo and Thepchai Supnithi
National Electronics and Computer Technology Center, Thailand
{vipas.sutantayawalee, peerachet.porkeaw, prachya.boonkwan, sitthaa.phaholphinyo, thepchai}@nectec.or.th
Improvement of Statistical Machine Translation using Character-Based Segmentation with Monolingual and Bilingual Information
We present a novel segmentation approach for Phrase-Based Statistical Machine Translation (PB-SMT) for languages where word boundaries are not obviously marked, using both monolingual and bilingual information, and demonstrate that (1) an unsegmented corpus can provide nearly identical results to a manually segmented corpus in the PB-SMT task when a good heuristic character clustering algorithm is applied to it, and (2) the performance of the PB-SMT task increases significantly when bilingual information is used on top of the monolingually segmented result. Our technique, instead of focusing on word separation, mainly concentrates on groups of characters. First, we group characters that reside in an unsegmented corpus by employing predetermined constraints and certain heuristic algorithms. Secondly, we enhance the segmented result by incorporating character group repacking based on alignment confidence. We evaluate the effectiveness of our method on the PB-SMT task using English-Thai, English-Lao and English-Burmese language pairs and report a best improvement of an 8.1% increase in BLEU score on the English-Thai pair.
Introduction
Word segmentation is a crucial part of Statistical Machine Translation (SMT), especially for languages with no explicit word boundaries such as Chinese, Japanese, and Thai. The writing systems of these languages allow each word to be written consecutively, without spaces between words. The issue of word boundary ambiguity arises when a word boundary is misplaced, resulting in an incorrect translation. An effective word segmenter therefore becomes a crucial pre-processing step for SMT. Word segmenters focusing on words, on characters [1], or on both [2][3] have been implemented to accomplish this goal.
Most word segmenters are supervised, i.e. they require a voluminous monolingual corpus. Various approaches are employed, such as dictionary-based methods, hidden Markov models (HMM), support vector machines (SVM), and conditional random fields (CRF). Although a number of segmenters offer promising results, some of them may be unsuitable for the SMT task due to the influence of the segmentation scheme [4]. Therefore, instead of relying solely on a monolingual corpus, the use of a bilingual corpus as a guideline for word segmentation to improve the performance of SMT systems has attracted increasing interest [4][5].
In this paper, we propose a novel segmentation approach for Phrase-Based Statistical Machine Translation (PB-SMT) for languages where word boundaries are not obviously marked, using both monolingual and bilingual information on English-Thai, English-Burmese and English-Lao language pairs, and demonstrate that (1) an unsegmented corpus can provide nearly identical results to a manually segmented corpus in the PB-SMT task when a good heuristic character clustering algorithm is applied to it, and (2) the performance of the PB-SMT task increases significantly when bilingual information is used on top of the monolingually segmented result. Our technique, instead of focusing on word separation, mainly concentrates on groups of characters. First, we group the characters of an unsegmented monolingual corpus by employing predetermined constraints and certain heuristic algorithms. Secondly, we enhance the segmented result by incorporating bilingual information, namely character cluster alignments, CC co-occurrence frequencies and alignment confidence. These two tasks can be performed repeatedly.
The remainder of this paper is organized as follows. Section 2 provides background related to our work. Section 3 describes the methodology of our approach. Section 4 presents the experimental setting, and Section 5 presents the experimental results and an empirical analysis. Sections 6 and 7 give the conclusion and future work, respectively.
Related Work
Thai Character Grouping
In the Thai writing system, there are no explicit word boundaries as in English, and a single Thai character does not carry a specific meaning as characters do in Chinese, Japanese and Korean. Thai characters can be consonants, vowels or tone marks, and a word is formed by combining these characters. From our observations, the average length of Thai words in the BEST2010 corpus (National Electronics and Computer Technology Center, Thailand, 2010) is 3.855. This makes the search space of Thai word segmentation very large.
To alleviate this issue, the notion of Thai Character Clustering (TCC) was introduced in [1] to reduce the search space using predetermined, unambiguous constraints for cluster formation. A group of characters may not be meaningful by itself and has to be combined with other consecutive groups to form a word. Characters within a group cannot be separated, according to Thai orthographic rules: for example, a vowel or tone mark cannot stand alone, and a tone mark must always be attached to the preceding character. [6] applied TCC to a word segmentation technique, which yields interesting results.
Bilingual Word Segmentation
Bilingual information has also been shown to be beneficial for word segmentation, and several methods use such information from bilingual corpora to improve it. [5] uses an unsegmented bilingual corpus and builds a self-learned dictionary using alignment statistics for an English-Chinese language pair. [4] starts from a manually segmented bilingual corpus and then tries to "repack" words from the existing alignment using alignment confidence. Both approaches evaluate performance in terms of translation improvement and report promising results on the PB-SMT task.
Methodology
This paper aims to compare translation quality on the SMT task between systems trained on a bilingual corpus in which both source and target are segmented, and systems trained on the same bilingual corpus with a segmented source but an unsegmented target. First, we make use of monolingual information by employing several character cluster algorithms on the unsegmented data. Second, we use bilingually-guided alignment information retrieved from the alignment extraction process to improve the character cluster segmentation. Then, we evaluate performance in terms of translation accuracy using the BLEU metric. We want to show that (1) the result of the PB-SMT task using an unsegmented corpus (unsupervised) is nearly identical to that of manually segmented (supervised) data, and (2) when bilingual information is also applied, the performance of PB-SMT improves further.
Notation
Given a target (Thai) sentence t_1^J consisting of clusters {c_1, ..., c_J}, where |c_j| >= 1: if |c_j| = 1, we call c_j a single character (sc); otherwise, we call it a character group (cg). In addition, given an English sentence e_1^I consisting of words {e_1, ..., e_I}, A_{E->T} denotes a set of English-to-target word alignments between e_1^I and t_1^J. Since we concentrate on one-to-many alignments, A_{E->T} can be rewritten as a set of pairs a_j = <e_i, c_j>, each denoting a link between one English word and the several Thai characters that form one character group.
Monolingual Information
Due to the issue mentioned in Section 2.1, we apply the character grouping technique (CC) to the target text in order to reduce the search space. Performing CC yields many character groups (cg), which can be merged together to obtain larger units that approach the notion of a word. However, for Thai we obtain not only cg but also sc, which usually have no meaning by themselves; moreover, the Thai, Burmese and Lao writing rules do not allow an sc to stand alone in most cases. We therefore developed several adapted versions of CC that use a pre-defined list of character sequences that can be grouped into words (confirmed by linguists; orthographic insight) to automatically pack characters into new cg. In addition, all single consonants in Thai, Burmese and Lao are forced to group with either the left or the right cluster, as required by their writing rules; this decision is made by consulting character co-occurrence statistics (heuristic algorithm).
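As an illustration of this idea, here is a minimal Python sketch. It is not the actual CC implementation: the set of dependent characters below is only a small illustrative subset of the Thai combining vowels and tone marks, and the word-list repacking is a simple greedy longest-match, whereas the real system also uses co-occurrence statistics for single consonants.

```python
# Illustrative subset of Thai combining marks (above/below vowels and tone
# marks) that cannot start a cluster; the real TCC rule set is larger.
DEPENDENT = set("\u0e31\u0e34\u0e35\u0e36\u0e37\u0e38\u0e39"
                "\u0e47\u0e48\u0e49\u0e4a\u0e4b\u0e4c")

def character_clusters(text):
    """Split unsegmented text into minimal clusters: a character that cannot
    stand alone is attached to the cluster of the preceding character."""
    clusters = []
    for ch in text:
        if clusters and ch in DEPENDENT:
            clusters[-1] += ch
        else:
            clusters.append(ch)
    return clusters

def repack_with_wordlist(clusters, wordlist):
    """Greedy longest-match repacking of clusters into entries of a pre-defined
    word list (the 'orthographic insight' step); other clusters are kept as-is."""
    out, i = [], 0
    while i < len(clusters):
        for j in range(len(clusters), i + 1, -1):   # try longer spans first
            if "".join(clusters[i:j]) in wordlist:
                out.append("".join(clusters[i:j]))
                i = j
                break
        else:
            out.append(clusters[i])
            i += 1
    return out
```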
Eventually, we obtain character group alignments from systems trained with the various CC approaches, which affect translation quality as shown in Section 5.1.
Bilingually-Guided Alignment Information
We begin with the sequence of small clusters resulting from the previous character grouping process. These small clusters can be merged together to form "words" using bilingually-guided alignment information: in general, small consecutive clusters on the target side that are aligned to the same word on the source side should be merged into a larger unit. The rest of this section describes our one-to-many alignment extraction process.
For one-to-many alignment, we applied processes similar to those in phrase extraction algorithm [7] which is described as follows.
With an English sentence e_1^I and a character cluster sequence c_1^J, we apply IBM models 1-5 to extract word-to-cluster translation probabilities in both directions, source-to-target P(c|e) and target-to-source P(e|c). The alignment points with the highest probability are greedily selected in each direction; Figures 1.a and 1.b show examples of the resulting source-to-target and target-to-source alignment points. We then take the intersection of the alignment pairs from both sides (Figure 1.c), and additional alignment points are added according to the growing heuristic algorithm. Finally, we select consecutive clusters that are aligned to the same English word as candidates. From Figure 1.d, we obtain the candidates (red, สีแดง) and (bicycle, จักรยาน).
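The candidate selection step can be sketched as follows (an illustration, not the original code; it assumes the symmetrized alignment links are already available as (English index, cluster index) pairs):

```python
from collections import defaultdict

def one_to_many_candidates(links):
    """links: iterable of (english_index, cluster_index) alignment points after
    intersection and growing.  For each English word, returns the maximal runs
    of consecutive target clusters aligned to it."""
    by_word = defaultdict(list)
    for e, c in links:
        by_word[e].append(c)
    candidates = []
    for e, cs in by_word.items():
        cs.sort()
        run = [cs[0]]
        for c in cs[1:]:
            if c == run[-1] + 1:          # extend the current consecutive run
                run.append(c)
            else:
                candidates.append((e, tuple(run)))
                run = [c]
        candidates.append((e, tuple(run)))
    return candidates
```

For the alignments of Figure 1.d, this would group, for example, the consecutive clusters of สีแดง under the single English word red.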
Character Group Repacking (CCR)
Although the alignment information obtained from the previous step is very helpful for the PB-SMT task, certain misaligned links need to be corrected. As shown in Figure 2, an English word may be aligned by the previous-step aligner with Thai characters {c_1, ..., c_j}, while it should actually be aligned with {c_1, ..., c_{j+2}}. Word repacking [4] is one approach that can efficiently resolve this issue. In this paper, however, we slightly modify the repacking technique by performing character group repacking (CCR) instead of word repacking. The main purpose of repacking is to group all small consecutive clusters on the target side that frequently align with a single word on the source side. The repacking approach uses two simple quantities: a co-occurrence frequency cooc(e, c) and an alignment confidence AC(a). cooc(e, c) is the number of times e and c co-occur in the bilingual corpus [4][9], and AC(a) measures how often the aligner aligns e and c when they co-occur. AC(a) is defined as
AC(a) = C(a) / cooc(e, c),
where C(a) denotes the number of alignments a suggested by the previous-step word aligner.
Unfortunately, due to the limited memory of our experiment machine, we cannot compute cooc(e, c) for all possible <e, c> pairs. We therefore slightly modify the computation: we first count C(a), and then compute cooc(e, c) only over the pairs proposed by the previous-step aligner, instead of over all occurrences in the corpus. By applying this modification, we eliminate <e, c> pairs that co-occur but are never aligned to each other by the previous-step aligner (C(a) equals zero), which reduces the search space and the complexity of our algorithm. Thirdly, we choose the pair with the highest AC(a) and repack all of its clusters on the target side into a new single unit. This process can be repeated; however, we ran it at most twice, since few new character groups appear after two iterations.
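A minimal sketch of the confidence computation and of picking the pair to repack (illustrative only; the corpus representation is an assumption, and in the real pipeline the merged unit is written back into the corpus before re-running the aligner):

```python
from collections import Counter

def alignment_confidence(corpus):
    """corpus: list of (source_words, target_clusters, aligned_pairs) triples,
    where aligned_pairs are the (word, cluster) links proposed by the
    previous-step aligner.  AC(a) = C(a) / cooc(e, c); following the
    memory-saving restriction above, cooc is only computed for pairs that the
    aligner proposed at least once."""
    aligned = Counter()                                  # C(a)
    for _, _, pairs in corpus:
        aligned.update(pairs)
    cooc = Counter()                                     # cooc(e, c)
    for words, clusters, _ in corpus:
        words, clusters = set(words), set(clusters)
        for (e, c) in aligned:
            if e in words and c in clusters:
                cooc[(e, c)] += 1
    return {pair: aligned[pair] / cooc[pair] for pair in aligned}

def repack_best(ac_scores):
    """The pair with the highest confidence: its clusters would be merged into a
    single new unit throughout the corpus before repeating the process."""
    return max(ac_scores.items(), key=lambda kv: kv[1])
```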
Data
We conduct our experiments on two bilingual corpora. One is an English-Thai corpus (the 650K corpus), constructed from several sources and covering multiple domains (e.g. news, travel, articles, entertainment, computing). The other is an English-to-multiple-language corpus (the 20K corpus), which focuses on the travel domain only and was developed from English sentences manually translated into Thai, Burmese and Lao by linguists. Table 1 shows statistics for these two corpora. Note that Test set #2 is manually segmented with a guideline different from that of Test set #1.
Tools and Evaluation
We evaluate our system in terms of translation quality based on phrase-based SMT. Source sentences are sequences of English words, while target sentences are sequences of clusters in Thai, Burmese or Lao; the length of each cluster depends on which approach is used in the experiment. Translation models and language models are trained with the standard phrase-based SMT setup. Alignments between source (English words) and target (Thai, Burmese and Lao character clusters) are extracted using GIZA++ [8], and the phrase extraction algorithm [7] is applied using the Moses SMT package. We apply SRILM [10] to train a 3-gram language model of the target side. We use the default parameter settings for decoding.
In the testing process, we use a data set that does not occur in the training data. We then compare the translation results with the references in terms of BLEU score rather than F-score, because it is cumbersome to construct a reliable gold standard when the annotation schemes differ. Therefore, we re-segment both the reference data (manually segmented data) and the translation output using the character grouping techniques. One may be concerned that using character groups instead of words leads to overestimated (higher than actual) scores, since BLEU is designed for words rather than character clusters. However, we use this BLEU score only to compare translation quality among our own experiments; comparing against other SMT systems would still require computing BLEU under the same segmentation guideline.
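The evaluation step can be sketched as follows: both hypothesis and reference strings are re-segmented with the same character-grouping function before BLEU is computed. This is an illustration only; the segment function stands for any of the CC variants above, and NLTK's corpus_bleu is used merely as an example scorer, not the toolkit used in the paper.

```python
from nltk.translate.bleu_score import corpus_bleu

def cluster_bleu(hypotheses, references, segment):
    """hypotheses/references: lists of unsegmented target-language strings.
    segment: a character-grouping function applied identically to both sides,
    so that BLEU n-grams are counted over clusters rather than words."""
    hyp_clusters = [segment(h) for h in hypotheses]
    ref_clusters = [[segment(r)] for r in references]   # one reference per sentence
    return corpus_bleu(ref_clusters, hyp_clusters)
```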
Results and Discussion
We conducted all experiments on the PB-SMT task and report the performance of the PB-SMT systems using the BLEU measure.
Monolingual Information
English -Thai language pair
First, we use the method shown in Figure 3(a) to obtain translation results. Table 2 shows the number of Thai character clusters in the 650K corpus, which decreases as successively richer character clustering approaches are applied, and Table 3 shows the performance of SMT systems trained with the different character grouping algorithms.
As seen from Table 3, the BLEU scores of the EN-TH pair in all corpora increase as richer clustering is applied, and are almost equal to the original result on Test #2 of the 650K corpus. This is because the character clusters tend to be merged into larger and larger units, which eventually approach the notion of a word. In addition, these experiments support claim (1): an unsegmented corpus is able to provide a nearly identical result to the upper bound in the PB-SMT task when a good heuristic character grouping algorithm is applied to it. However, since CC does not rely on semantic knowledge, some character groups may not form a meaningful word, resulting in incorrect translations in the SMT task.
Preliminary experiment on low resource language (LRL)
We also conduct experiments on LRLs, choosing Lao and Burmese, by imitating TCC as Lao Character Clustering (LCC) and Burmese Character Clustering (BCC) respectively, with the same method as in Section 5. As seen in Table 4, the BLEU scores of CC are almost equal to the original results. In the English-Burmese pair, however, the character grouping algorithm is able to yield better performance than the upper bound data. We suspect that the Burmese word segmentation guideline is still unstable, resulting in misplaced word boundaries.
Bilingually-Guided Alignment Information
As mentioned earlier in Section 3.4, we can improve the translation result by making use of alignment information from the previous translation process. Therefore, we perform experiments using the method described in Figure 3(b) to obtain another set of translation results. However, since corpus size has a direct impact on translation results, we test this hypothesis on the 650K corpus only.
As shown in Table 5 and Figure 4, when CCR is deployed on each training dataset, the BLEU results increase in the same manner as without CCR. This supports claim (2): the performance of the PB-SMT task increases significantly when bilingual information is used on top of the monolingually segmented result. In addition, several points should be noted. First, the CCR method yields a maximum BLEU increase of 8.1%. Second, once the CCR method reaches a certain point, only small improvements or minor degradations are obtained, as shown by the CC-FN-B results with and without CCR; this is because the number of clusters produced by this character grouping algorithm is almost equal to the number of words in the threshold setting, as shown in Table 2. However, this approach might suffer from the word boundary misplacement problem. Third, character grouping that uses CC with orthographic insight and the heuristic algorithm combined with the CCR approach (CC-FN-B with CCR) is able to beat the threshold translation result on test set #2 for the first time.
Conclusion
In this paper, we introduced a new approach to the word segmentation task for SMT. Instead of starting at the word level, we focus on character groups, because this approach can operate on an unsegmented corpus or on manually segmented corpora that follow multiple segmentation guidelines. To begin, we apply several adapted versions of CC to the unsegmented corpus. Next, we use a bilingual corpus to find alignment information for all <e, c> pairs. Then, we employ the character group repacking method to form larger clusters.
We evaluate our approach on the translation task using corpora from several sources and different domains, and report the results with the BLEU metric. Our technique demonstrates that (1) we can achieve a dramatic improvement in BLEU of up to 8.1% when we apply CC with CCR, and (2) it is possible to surpass the translation result of the manually segmented corpus by using CC-FN-B with CCR.
Future Work
Several extensions could be added to this approach. Firstly, we can make use of trigram (and n-gram) statistics, maximum entropy or conditional random fields in the heuristic algorithm of an enhanced version of CC. Secondly, we can apply our approach to a bilingual corpus in which neither the source nor the target side is segmented. Thirdly, we can modify the CCR process to re-rank the alignment confidence using a discriminative approach. Lastly, a named entity recognition system can be integrated with our approach to further improve SMT performance.
Figure 1: The process of one-to-many alignment extraction. (a) Source-to-target word alignment. (b) Target-to-source word alignment. (c) Intersection between (a) and (b). (d) Result of (c) after applying the growing heuristic algorithm.
Figure 2: A case where the previous-step aligner misaligned certain clusters despite the fact that they frequently co-occur with the English word.
Figure 3: Experiment flows: (a) monolingual information; (b) bilingually-guided alignment information.
Figure 4: The BLEU scores on (a) test set no. 1 and (b) test set no. 2.
Table 1: Number of sentence pairs in each data set of the bilingual corpora.
Table 2: Number of Thai character groups in the 650K corpus when different character clustering approaches are applied.
Table 3: The performance of SMT trained with different character grouping algorithms (BLEU, EN-TH pair).
Approaches   650K corpus, Test #1 (without CCR)   650K corpus, Test #2 (without CCR)   20K corpus
CC           37.12                                36.78                                47.63
CC-FN        40.23                                38.36                                49.21
CC-FN-B      44.69                                40.45                                49.21
Threshold    47.04                                40.73                                49.56
Table 5: BLEU scores of each character clustering method without and with CCR, and the percentage of BLEU improvement when CCR is applied (En-TH, 650K corpus).
(a) Test #1:
Approaches   Without CCR   With CCR   % BLEU improvement
CC           37.12         40.13       8.11
CC-FN        40.23         41.90       4.15
CC-FN-B      44.69         44.43      -0.58
Threshold    47.04         N/A         N/A
(b) Test #2:
Approaches   Without CCR   With CCR   % BLEU improvement
CC           36.78         38.87       5.68
CC-FN        38.36         39.09       1.90
CC-FN-B      40.45         40.81       0.89
Threshold    40.73         N/A         N/A
References
[1] T. Teeramunkong, V. Sornlertlamvanich, T. Tanhermhong and W. Chinnan, "Character cluster based Thai information retrieval," in Proceedings of the Fifth International Workshop on Information Retrieval with Asian Languages (IRAL '00), 2000.
[2] C. Kruengkrai, K. Uchimoto, J. Kazama, K. Torisawa, H. Isahara and C. Jaruskulchai, "A Word and Character-Cluster Hybrid Model for Thai Word Segmentation," in Eighth International Symposium on Natural Language Processing, Bangkok, Thailand, 2009.
[3] Y. Liu, W. Che and T. Liu, "Enhancing Chinese Word Segmentation with Character Clustering," in Chinese Computational Linguistics and Natural Language Processing Based on Naturally Annotated Big Data, China, 2013.
[4] Y. Ma and A. Way, "Bilingually motivated domain-adapted word segmentation for statistical machine translation," in Proceedings of the 12th Conference of the European Chapter of the Association for Computational Linguistics (EACL '09), pp. 549-557, Stroudsburg, PA, USA, 2009.
[5] J. Xu, R. Zens and H. Ney, "Do We Need Chinese Word Segmentation for Statistical Machine Translation?," in ACL SIGHAN Workshop 2004, pp. 122-129, 2004.
[6] P. Limcharoen, C. Nattee and T. Theeramunkong, "Thai Word Segmentation based-on GLR Parsing Technique and Word N-gram Model," in Eighth International Symposium on Natural Language Processing, Bangkok, Thailand, 2009.
[7] P. Koehn, F. J. Och and D. Marcu, "Statistical phrase-based translation," in Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology (NAACL '03), Stroudsburg, PA, USA, 2003.
[8] F. J. Och and H. Ney, "A systematic comparison of various statistical alignment models," Computational Linguistics, vol. 29, no. 1, pp. 19-51, 2003.
[9] I. D. Melamed, "Models of translational equivalence among words," Computational Linguistics, vol. 26, no. 2, pp. 221-249, 2000.
[10] "SRILM -- An extensible language modeling toolkit," in Proceedings of the International Conference on Spoken Language Processing, 2002.
6,992,160 | MixedEmotions: Social Semantic Emotion Analysis for Innovative Multilingual Big Data Analytics Markets ICT-15-2014 Big data and Open Data Innovation and take-up -Innovation Action Industry Partners Academic Partners | Emotion analysis is central to tracking customer and user behaviour and satisfaction, which can be observed from user interaction in the form of explicit feedback through email, call centre interaction, social media comments, etc., as well as implicit acknowledgment of approval or rejection through facial expressions, speech or other non-verbal feedback. In Europe specifically, but increasingly also globally, an added factor here is that user feedback can be in multiple languages, in text as well as in speech and audio-visual content. This implies different cultural backgrounds and thus different ways to produce and perceive emotions in everyday interactions, beyond the fact of having specific rules for encoding and decoding emotions in each language.Making sense of accumulated user interaction from different ('mixed') data sources, modalities and languages is challenging and has not yet been explored in fullness in an industrial context. Commercial solutions exist but do not address the multilingual aspect in a robust and large-scale setting and do not scale up to huge data volumes that need to be processed, or the integration of emotion analysis observations across data sources and/or modalities on a meaningful level, i.e. keeping track of entities involved as well the connections between them (who said what? to whom? in the context of which event, product, service?)The MixedEmotions project will implement an integrated Big Linked Data platform for emotion analysis across heterogeneous data sources, languages and modalities, building on existing state-ofthe-art tools, services and approaches that will enable the tracking of emotional aspects of user interaction and feedback on an entity level. The platform will provide an integrated solution for:Large-scale emotion analysis and fusion on heterogeneous, multilingual, text, speech, video and social media data streams, leveraging open access and proprietary data sources, exploiting also social context by leveraging social network graphs Semantic-level emotion information aggregation and integration through robust extraction of social semantic knowledge graphs for emotion analysis along multidimensional clustersThe platform will be developed and evaluated in the context of three cross-domain pilot projects that are representative of a variety of data analytics markets: Social TV, Brand Reputation Management, Call Centre Operations.211 | [] | MixedEmotions: Social Semantic Emotion Analysis for Innovative Multilingual Big Data Analytics Markets ICT-15-2014 Big data and Open Data Innovation and take-up -Innovation Action Industry Partners Academic Partners
Industry partners: Paradigma Tecnológico (Spain), Millward Brown (Czech Republic), Phonexia (Czech Republic), SindiceTech (Ireland), Deutsche Welle (Germany), and a further industry partner based in Italy.
Academic partners: Insight Centre for Data Analytics, National University of Ireland, Galway (coordinator); Universidad Politécnica de Madrid (Spain); University of Passau (Germany); Brno University of Technology (Czech Republic).
MixedEmotions: Social Semantic Emotion Analysis for Innovative Multilingual Big Data Analytics Markets ICT-15-2014 Big data and Open Data Innovation and take-up -Innovation Action Industry Partners Academic Partners
Project duration: April 2015 -March 2017
Emotion analysis is central to tracking customer and user behaviour and satisfaction, which can be observed from user interaction in the form of explicit feedback through email, call centre interaction, social media comments, etc., as well as implicit acknowledgment of approval or rejection through facial expressions, speech or other non-verbal feedback. In Europe specifically, but increasingly also globally, an added factor here is that user feedback can be in multiple languages, in text as well as in speech and audio-visual content. This implies different cultural backgrounds and thus different ways to produce and perceive emotions in everyday interactions, beyond the fact of having specific rules for encoding and decoding emotions in each language.Making sense of accumulated user interaction from different ('mixed') data sources, modalities and languages is challenging and has not yet been explored in fullness in an industrial context. Commercial solutions exist but do not address the multilingual aspect in a robust and large-scale setting and do not scale up to huge data volumes that need to be processed, or the integration of emotion analysis observations across data sources and/or modalities on a meaningful level, i.e. keeping track of entities involved as well the connections between them (who said what? to whom? in the context of which event, product, service?)The MixedEmotions project will implement an integrated Big Linked Data platform for emotion analysis across heterogeneous data sources, languages and modalities, building on existing state-ofthe-art tools, services and approaches that will enable the tracking of emotional aspects of user interaction and feedback on an entity level. The platform will provide an integrated solution for:Large-scale emotion analysis and fusion on heterogeneous, multilingual, text, speech, video and social media data streams, leveraging open access and proprietary data sources, exploiting also social context by leveraging social network graphs Semantic-level emotion information aggregation and integration through robust extraction of social semantic knowledge graphs for emotion analysis along multidimensional clustersThe platform will be developed and evaluated in the context of three cross-domain pilot projects that are representative of a variety of data analytics markets: Social TV, Brand Reputation Management, Call Centre Operations.211
Project duration: April 2015 - March 2017
Summary
Emotion analysis is central to tracking customer and user behaviour and satisfaction, which can be observed from user interaction in the form of explicit feedback through email, call centre interaction, social media comments, etc., as well as implicit acknowledgment of approval or rejection through facial expressions, speech or other non-verbal feedback. In Europe specifically, but increasingly also globally, an added factor here is that user feedback can be in multiple languages, in text as well as in speech and audio-visual content. This implies different cultural backgrounds and thus different ways to produce and perceive emotions in everyday interactions, beyond the fact of having specific rules for encoding and decoding emotions in each language.
Making sense of accumulated user interaction from different ('mixed') data sources, modalities and languages is challenging and has not yet been explored in fullness in an industrial context. Commercial solutions exist but do not address the multilingual aspect in a robust and large-scale setting and do not scale up to huge data volumes that need to be processed, or the integration of emotion analysis observations across data sources and/or modalities on a meaningful level, i.e. keeping track of entities involved as well the connections between them (who said what? to whom? in the context of which event, product, service?)
The MixedEmotions project will implement an integrated Big Linked Data platform for emotion analysis across heterogeneous data sources, languages and modalities, building on existing state-ofthe-art tools, services and approaches that will enable the tracking of emotional aspects of user interaction and feedback on an entity level. The platform will provide an integrated solution for:
Large-scale emotion analysis and fusion on heterogeneous, multilingual, text, speech, video and social media data streams, leveraging open access and proprietary data sources, exploiting also social context by leveraging social network graphs Semantic-level emotion information aggregation and integration through robust extraction of social semantic knowledge graphs for emotion analysis along multidimensional clusters
The platform will be developed and evaluated in the context of three cross-domain pilot projects that are representative of a variety of data analytics markets: Social TV, Brand Reputation Management, Call Centre Operations. |
6,738,284 | Control Verbs, Argument Cluster Coordination and MCTAG | In this paper 1 we present an extension of MC-TAGs with Local Shared Derivation (Seddah, 2008) which can handle non local elliptic coordinations. Based on a model for control verbs that makes use of so-called ghost trees, we show how this extension leads to an analysis of argument cluster coordinations that provides an adequate derivation graph. This is made possible by an original interpretation of the MCTAG derivation tree mixing the views ofKallmeyer (2005)and Weir (1988). | [
216848664,
216804363,
28816477,
12123126
] | Control Verbs, Argument Cluster Coordination and MCTAG
Djamé Seddah djame.seddah@paris-sorbonne.fr
Benoit Sagot benoit.sagot@inria.fr
Laurence Danlos laurence.danlos@linguist.jussieu.fr
Alpage & Univ. Paris-Sorbonne, Paris, France
Alpage, Inria, Paris, France
Alpage & Univ. Paris 7, Paris, France
Control Verbs, Argument Cluster Coordination and MCTAG
In this paper 1 we present an extension of MC-TAGs with Local Shared Derivation (Seddah, 2008) which can handle non local elliptic coordinations. Based on a model for control verbs that makes use of so-called ghost trees, we show how this extension leads to an analysis of argument cluster coordinations that provides an adequate derivation graph. This is made possible by an original interpretation of the MCTAG derivation tree mixing the views ofKallmeyer (2005)and Weir (1988).
Introduction
Elliptic coordinate structures are a challenge for most constituent-based syntactic theories. To model such complex phenomena, many works have argued in favor of factorized syntactic structures (Maxwell and Manning, 1996), while others have argued for distributive structures that include a certain amount of non-lexically realized elements (Beavers and Sag, 2004). Of course, the boundary between those two approaches is not sharp since one can decide to first build a factorized syntactic analysis and then construct a more distributive structure (e.g., logical or functional).
So far, the Combinatorial Categorial Grammar (CCG) framework (Steedman, 2001) is considered as one of the most elegant theories in accounting for coordination. Indeed, the CCG syntactic layer, which is closely tied to an syntax-semantic interface handled in a lexicalized way, permits the coordination of nonstandard constituents that cause a nontrivial challenge for other frameworks. On the other hand, some phenomena such as coordination of unlike categories are still a challenge for theories based on strict atomic category coordination.
In the broader context of ellipsis resolution, Dalrymple et al. (1991) propose to consider elided elements as free logical variables resolved using Higher Order Unification as the solving operation. Inspired by this approach and assuming that non-constituent coordination can be analyzed with ellipsis (Beavers and Sag, 2004), 2 we consider elliptic coordination as involving parallel structures where all non lexically realized syntactic elements must be represented in a derivation structure. This path was also followed by Seddah (2008) who proposed to use the ability of Multi Component TAGs (MCTAGs) (Weir, 1988) to model such a parallelism by including conjunct trees in a same tree set. This simple proposal allows for a straightforward analysis of gapping constructions. The coverage of this account is then extended by introducing links called local shared derivations which, by allowing derivations to be shared across trees of a same set, permit to handle various elliptic coordinate structures in an efficient way. This work showed that, assuming the use of regular operators to handle n-ary coordinations, a broad range of coordinate structures could be processed using a Tree-Local MCTAG-based formalism named Tree Local MCTAG with Local Shared Derivations. Nevertheless, being tied to the domain of locality of a tree set, the very nature of this mechanism forbids the sharing of derivations between different tree sets, thus preventing it from analyzing non-local elliptic coordinations.
In this paper, we introduce an extension of this model that can handle non-local elliptic coordination, close to unbounded ellipsis (Milward, 1994), which can be found in structures involving control verbs and elliptic coordinations. We also show how our model can cope with argument cluster coordination and why an interpretation of the derivation tree mixing David Weir's (1988) original view of the MCTAG derivation tree, where each MC set is interpreted as a unique node, and the one introduced by Laura Kallmeyer (2005), where the derivations are the ones from the underlying TAG grammar, is required to yield a derivation tree as close as possible to a proper predicate-argument structure.

[Figure 1: Sketch of an analysis for "Jean aime Marie et Paul Virginie". The root label of α-aimer(b) is subscripted in order to avoid overgeneration cases such as *"Paul ε Virginia and John loves_i Mary". The same procedure is applied for the remaining analysis although the marks are not displayed.]
Standard elliptic coordinate structures
An MCTAG account of many coordinate structures involving ellipsis has been proposed by Seddah (2008). The core idea is to use the extended MC-TAG's domain of locality to enforce a somewhat strict parallelism between coordinate structures. For example, gapping, as in (1) can be modeled, without any specific operation, by including in a same MC-Set two trees that are identical except for one thing: one is fully lexicalized whereas the other one is anchored by an empty element.
(1) Jean aime i Marie et Paul ε i Virginie (John loves i Mary and Paul ε i Virginia)

Calling this second lexically unrealized tree a ghost tree, the missing anchor can be retrieved simply because the tree it anchors is in the same MC-Set as its ghost tree. In other words, the label of the MC-Set includes the anchor of its fully lexicalized tree. The application of this model to (1) is shown in Figure 1. Note that this account only requires the expressivity of Tree-Local MCTAGs and that, unlike other approaches for gapping in the LTAG framework (Sarkar and Joshi, 1996; Seddah and Sagot, 2006; Lichte and Kallmeyer, 2010), this proposal for gapping does not require any special device or modification of the formalism itself.
In order to model derivations that involve the elision of one syntactic verbal argument as in right node raising cases (RNR) or right subject elision coordinations, the formalism is extended with oriented links, called local shared derivations (local SDs), between mandatory derivation site nodes: whenever a derivation is not realized on a given node and assuming that a local SD has been defined between this node and one possible antecedent, a derivation between those nodes is inserted in the derivation structure. 3 Furthermore, if the constraint of having identical tree schemata in a tree set (one being fully lexicalized and the other anchored by an empty element) is relaxed, one gets the possibility to give more flexibility to the structure parallelism enforced by this model of gapping. This is what is needed to handle coordination of unlike categories and zeugma constructions (Seddah, 2008). In the same spirit, by viewing the anchoring process as a regular derivation, 4 and hence allowing local SDs to occur on anchoring derivations as well, one can get a very flexible model allowing for trees sharing the same tree schema but with different anchors to be coordinated. Thus, RNRs are simply analyzed in this framework by having two identical tree schemata anchored by two different verbs and with one local shared derivation occurring from the N1 node of the right conjunct tree to the N1 of its left counterpart. Such an analysis of RNR for (2) is shown in Figure 2. [Footnote 3: Note that a real derivation always has precedence over a local shared one. Footnote 4: Represented, for simplicity, as a special case of substitution labeled V_anchor↓ in the relevant figure.]
(2) Jean fabrique ε i et Marie vend [des crêpes] i (John makes ε i and Mary sells pancakes i)
MCTAG with Local Shared Derivations
Following Kallmeyer (2005), we define an MCTAG as a tuple G_MCTAG = ⟨I, A, N, T, S⟩, where I (resp. A) is the set of initial (resp. auxiliary) trees, N (resp. T) the set of nonterminal (resp. terminal) labels and S the set of elementary MC-Sets. An MCTAG with Local Shared Derivations (MCTAG-LSD) G whose underlying MCTAG is G_MCTAG is defined as G = ⟨I, A, N, T, S, L⟩, where L is the set of oriented links between two leaf nodes of two trees in a same MC-Set in S. MCTAG-LSD derivations extend derivations of the underlying MCTAG by allowing for local shared derivations, which we shall now define.

Let Γ = {γ_0, ..., γ_n} be an MC-Set in S. Let L_Γ be the set of (oriented) links in Γ, i.e. pairs of the form ⟨N_L, N_R⟩ where N_L and N_R are nodes in two different trees in Γ. Let us suppose that:

• a tree γ′ is substituted on a node N_L in a tree γ_i
• there exists a node N_R in another tree γ_j ∈ Γ such that ⟨N_L, N_R⟩ is in L_Γ
Then, a local shared derivation can be created as follows:
• a substitution link between γ ′ and γ j is added in the derivation structure; thus, γ ′ has at least two ancestors (γ i and γ j ) in the derivation structure, which becomes a DAG instead of a tree;
• an initial tree anchored by an empty element is substituted on the node N R . 5
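To make these two steps more tangible, here is a small, purely illustrative Python sketch (our own toy encoding, not part of the formalism or of any existing implementation) of how a local shared derivation turns the derivation structure into a DAG; all identifiers are invented for the example.

```python
from collections import defaultdict

# Toy encoding of a derivation structure: parent tree -> list of (node, child tree) edges.
derivation_edges = defaultdict(list)
# Nodes of gamma_j that end up filled by an epsilon-anchored (ghost) initial tree.
ghost_substitutions = []

# One oriented link <N_L, N_R> of L_Gamma, from a node of gamma_i to a node of gamma_j.
links = {("gamma_i", "N_L"): ("gamma_j", "N_R")}

def substitute(child_tree, parent_tree, node):
    """Record an ordinary substitution and, if a link licenses it, the local shared derivation."""
    derivation_edges[parent_tree].append((node, child_tree))
    if (parent_tree, node) in links:
        gamma_j, n_r = links[(parent_tree, node)]
        # The extra substitution link between the child and gamma_j: the child now has two ancestors.
        derivation_edges[gamma_j].append((n_r, child_tree))
        # An initial tree anchored by an empty element fills N_R in the derived tree.
        ghost_substitutions.append((gamma_j, n_r))

substitute("alpha_Jean", "gamma_i", "N_L")
print(dict(derivation_edges))   # alpha_Jean now appears under both gamma_i and gamma_j
print(ghost_substitutions)      # [('gamma_j', 'N_R')]
```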
Note that this also applies for mandatory adjunctions, besides substitutions. Any MCTAG derivation is a valid MCTAG-LSD derivation. However, local shared derivations allow for performing additional derivation operations. Therefore, the language generated by G strictly contains the language generated by G_MCTAG. However, these additional derivations can be simulated in a pure MCTAG fashion, as follows. For a given MCTAG-LSD MC-Set that contains a unique local shared derivation link, we can generate two MCTAG MC-Sets, one that would enforce the substitution by lexicalized trees at both ends of the link, and one that would enforce the substitution of a lexicalized tree at the starting node of the link and the substitution of a ghost tree at the other end of the link. This mechanism can be generalized to MC-Sets with more than one local shared derivation. This sketches the proof that the set of languages generated by MCTAG-LSDs is the same as that generated by MCTAGs. Therefore, MCTAG-LSDs and MCTAGs have the same weak generative capacity. Moreover, these considerations still hold while restricting G_MCTAG to be TL-MCTAG. Therefore, TL-MCTAG-LSDs and TL-MCTAGs have the same weak generative power. In order to cope with very large grammar size, the use of regular operators to factorize out TAG trees has been proposed by (Villemonte de La Clergerie, 2005), and has led to a drastic reduction of the number of trees in the grammar. The resulting formalism is called factorized TAGs and was adapted by Seddah (2008) to the MCTAG-LSD framework in order to handle n-ary coordinations. The idea is to factorize MCTAG-LSD sets that have the same underlying MCTAG set (i.e. they are identical if links are ignored). Indeed, all such MC sets can be merged into one unique tree set associated with the union of all corresponding link sets. However, as with factorized TAGs, we need to add to the resulting tree set a list of constraints, R, on the construction of local shared derivations. The result is an extended formalism, called factorized MCTAG-LSD, which does not extend the expressive power of MCTAG-LSD but allows for more compact descriptions. Our resulting coordination scheme is shown in Figures 3 and 4.
[Figure 2: Sketch of a right node raising derivation for: Jean vend ε i et Marie fabrique [des crêpes] i (John makes ε i and Mary sells pancakes i) (Seddah, 2008). Note that the tree set αN0VN1 includes all possible Local Shared Derivation links, even though only the link between the two N0 nodes is used here.]
The case for Unbounded Ellipsis
The problem with this model is its heavy dependence on the domain of locality of a tree set. In fact, if creating a link between two derivation site nodes inside the same tree set is straightforward, things become complicated if the derivations that must be shared involve two nodes from different tree sets. For example, in cases involving a control verb and right-subject ellipsis such as in (3), the subject is shared among the three verbs, although the control verb elementary tree (see Figure 6) cannot be in the same tree set as the others. 6
(3) Jean i ronfle et ε i espère ε i dormir (John i snores and ε i hopes ε i to sleep)
Control Verb and MCTAG
Regarding the control verb phenomenon, an LTAG analysis was proposed by Seddah and Gaiffe (2005) 7 involving a complex parsing device, the so-called argumental fusion, and a lexicon-based information structure, the control canvas, stating which argument is controlled by the verb (e.g. subject for to hope and object for to forbid). [Footnote 6: We assume a non-VP coordination analysis of (3). Footnote 7: The pure LTAG analysis of French control verbs was initially proposed by Abeillé (1998).] The idea there was to view control verbs as capable of transferring their controlled argument to the trees in which they adjoin by means of partial derivations, allowing for the creation of a pseudo-derivation between the argument of the control verb tree (i.e. the Control Tree) and the embedded verb. This pseudo-derivation accounts for the fact that a syntactic argument of the embedded verb is not realized whereas its morphological features are actually transferred from the Control Tree substitution node through percolation of its feature structure, 8 thus making the underlying unrealized derivation explicit. 9 Figure 6 gives an overview of the process leading to a derivation graph (Seddah and Gaiffe, 2005). This analysis can be rephrased in our framework by associating the control tree with a single node sharing a derivation with the node controlled by the verb, as illustrated in Figure 7. Note that similarly to the initial LTAG implementation discussed above, where the argumental fusion could only occur on the node of a tree where the control tree was to adjoin, it is necessary to restrict the substitution of the control verb MC set's single node in the same way. In other words, to avoid overgeneration, in the case of chains of control (e.g., John hopes to forbid Mary to sleep), the derivations of a control verb MC set's trees must be tree local. 10
Control Verb and Coordination
Until now, we have assumed that only initial trees anchored by verbs could be described in an MC-Set together with their ghost trees. Therefore, there is no way to create derivation links between different MC-Sets for providing an elegant analysis of (3) while re-maining in TL-MCTAG-LSD. Nevertheless, nothing prevents us from allowing nominal trees to be characterized in the same way. This allows a (lexically) anchored tree to substitute into a tree of a given MC-Set while one of its ghost trees substitutes into another tree from a different tree set. Thus, it becomes possible to substitute a tree anchored by Jean into the tree anchored by dormir, while its unrealized counterpart will substitute into the argument node of the control verb, therefore allowing the derivation tree displayed in Figure 5a. As one tree is derived into one MC-Set and its ghost tree into another, this analysis falls beyond TL-MCTAG, and benefits from the larger expressivity of NL-MCTAGs. It shall be noted that having an unrestricted potential number of unrealized ghost trees inside a nominal MC-Set means that a substitution of such a ghost tree can occur in lieu of a shared derivation, thus allowing coindexations of derivation nodes instead of sharing (cf. Figure 5b). This potential source of ambiguity could be circumvented by stating precedence rules between shared derivations and ghost derivations (i.e. derivation of ghost trees). Nevertheless, such an ambiguity is precisely what is needed to provide an analysis of argument cluster coordination in our framework, as we shall now demonstrate.
Argument cluster coordination
Assuming an ellipsis analysis for argument cluster coordination (ACC; (Beavers and Sag, 2004)), sentences such as (4) can be simply analyzed as a case of gapping plus a right subject elision in our framework. This requires an MC-Set α-donner which includes a tree anchored by donner/give and its ghost tree, as depicted in Figure 8. However, let us assume an analysis involving a right subject elision and a gapping of the main verb. Then, using the extension of our framework that we defined for handling unbounded ellipsis (section 4), the subject of ε j can be obtained in two different ways: (i) via a local shared derivation as sketched in the previous sections (no ghost tree is needed in the MC-Set α-Jean, which contains one unique tree); or (ii) as a ghost tree that belongs to the MC-Set α-Jean.
Note that if we follow Weir's (1988) original definition of MCTAG derivation, both ways to obtain the subject lead to the same derivation structure. Our own model implies that derivation steps with LSD or involving ghost trees will lead to different structures. This comes from the fact that our model is based on Kallmeyer's per-tree interpretation of MC-TAG derivation.
More precisely, Weir's definition of MCTAG derivation always implies a sharing, whereas Kallmeyer's own definition leads to two different, possibly co-indexed, nodes. These two possible interpretations of derivation can handle the difference between (i) an elided anchor that refers to the same individual or event as the anchor of the lexicalized tree in the same MC-Set (as Jean in (4)) and (ii) an elided anchor that refers to another (co-indexed) instance of the same class of individuals, or events, (as fleur/flower in (5)).
ε i ε j une ε k rouge à Paul (ε i ε j a red (one) k to Paul)
Therefore, what we need is a mechanism that can determine whether a given MC-Set denotes a unique event or individual, the latter corresponding to the sharing case or a list of events or individuals that are instances of the same class of events or individuals. Such a mechanism requires more than just syntactic information, typically it needs to rely on an adequate type system. Let us consider again example (5). Whatever the interpretation of the derivation operations, the derivation runs as follows. Nominal MC-sets α-fleur and α-Jean include ghost trees, whereas the auxiliary trees β-bleu and β-rouge have no ghost trees. 11 The auxiliary tree in β-bleu adjoins to the non-ghost tree in α-fleur while the one in β-rouge adjoins to the ghost tree in α-fleur. The determiners are treated in the same way. Next, the tree based on the nonghost tree in α-fleur substitutes in the non-ghost tree in α-donner, whereas the other tree substitutes in the ghost tree in α-donner. 12 The gapping and right subject elision are then handled as in Section 2. Now, let us suppose that we associate the MC-Set α-Jean with a type <e> and the MC-Set αfleur with type <e, t>. Let us postulate that we use Kallmeyer's per-tree interpretation for MC-Sets with type <e, t> and Weir's interpretation for MC-Sets with type <e>, the resulting derivation structure would be exactly the expected predicateargument structure as shown in Figure 9b and will only require the expressive power of Set Local MC-TAGs.
To show how such a structure could be generated, we assumed a rather naive syntax-semantics interface where all elements of a nominal MC-set have the same scope, regardless of their semantic types. That is, as pointed out by an anonymous reviewer, if an NP is right-node-raised, or undergoes a right-subject elision, 13 we can have an NP with type <e, t> that leads to a wide scope reading which would imply a single node in the derivation tree. In fact, should we want to distinguish between narrow and wide scope readings, we would need a richer model that could infer scope information from all trees of a MC-set. It would be very interesting to see how a model à la Kallmeyer and Joshi (2003) could be integrated in our framework. In fact, the idea of adding another type of node carrying scope information through the derivation structure seems natural considering the nature of our proposal.
Discussion
If syntactic and semantic structures were tied by a strict isomorphism, the TAG derivation tree, with its strict encoding of subcategorized arguments, could have been considered as a proper predicateargument structure. Unfortunately, due to a lack of expressive power, most of the complicated cases of mismatch between syntax and semantics cannot be formalized without breaking the elegance of TAGs' main property, namely that dealing with elementary trees means dealing with partial dependency structures. Over the last fifteen years, solving this problem has mobilized many teams, and, as noted by (Nesson and Shieber, 2006), led to the emergence of two schools. One focusing on giving more expressive power to the formalism in order to ease either a tight integration between the logical and the syntactic layers (Kallmeyer and Joshi, 1999;Gardent and Kallmeyer, 2003) or a capacity to handle, for instance, free word order languages (Lichte, 2007). The other school focuses either on keeping the syntactic TAG backbone as pure as possible, by designing a new derivation operation to handle coordination (Sarkar and Joshi, 1996) or on carefully designing a syntax-semantic interface built upon TAG derivations (Shieber and Schabes, 1990;Shieber and Nesson, 2007). Our proposal stands in between as we acknowledge that pure TAGs are not powerful enough to carry on simple analysis of complex phenomena while bringing the derivation tree closer to a predicate-argument structure. Recent proposals in the synchronous TAG framework share the same concern. In fact, Shieber and Nesson (2007) use Vector MCTAG (Rambow, 1994), for its ability to underspecify dominance relations and provide the synchronized logical layer with a derivation structure suitable for the analysis of control verbs. However, as we have shown, our solution for control requires a generalization of the mechanism designed for handling elliptic coordination that needs the expressive power of Non Local MCTAGs and tight integration of our proposal with a syntax-semantic interface. This raises two open questions: What generative power do we really need to build appropriate derivation structures? More importantly, where do we want syntax to stop?
Conclusion
We have shown how to extend an MCTAG account of coordination with a simple mechanism added on top of its extended domain of locality and which enables the handling of more complex constructions involving control verbs and elliptic coordinations. We have also shown how argument cluster coordinations could be treated in our framework without any special treatment besides the inclusion of a small type inference system if one wants to provide a proper dependency structure. Our work also shows that our treatment of such coordinate constructions needs the expressive power of Non Local MCTAGs to cope with unbounded ellipsis and Set Local MC-TAGs for ACC.
Figure 3: Factorized α-et with n conjuncts.
Figure 4: Factorized MC-Set with Local SDs. Constraints are not displayed.
Figure 6: Overview of control verb analysis.
Figure 5: MCTAG-LSD derivation for "Jean ronfle et espère dormir" (John snores and hopes to sleep). For the sake of legibility, anchoring derivations of verbal trees are not displayed in this figure.
Figure 7: MC-Set for control verb espérer (to hope) and derivation tree for (3).
(4) Jean i donne j une fleur à Marie et ε i ε j une bague à Paul (John gives Mary a flower and Paul, a ring)
(5) Jean i donne j une fleur k bleue à Marie et ε i ε j une ε k rouge à Paul (John i gives j a blue flower k to Mary and ε i ε j a red (one) k to Paul)
Figure 8: MC-Set α-donner (constraints on links are defined as follows: {(A, {B|C})}).
Figure 9: Sketch of derivations for Argument Cluster Coordination of sentence (5) (John i gives j a blue flower k to Mary and ε i ε j a red (one) k to Paul). For the sake of readability, local shared derivations (from (a)N0 to (b)N0 and (b)N1 to (a)N1) are not displayed in this figure.
Timm Lichte and Laura Kallmeyer. 2010. Gapping through TAG derivation trees. In Proceedings of The 10th International Conference on Tree Adjoining Grammars and Related Formalisms (TAG+10), Yale, USA, June.
Timm Lichte. 2007. An MCTAG with Tuples for Coherent Constructions in German. In Proceedings of the 12th Conference on Formal Grammar 2007, Dublin, Ireland.
John T. Maxwell, III and Christopher D. Manning. 1996. A theory of non-constituent coordination based on finite-state rules. In Miriam Butt and Tracy Holloway King, editors, On-line Proceedings of the LFG96 Conference.
David Milward. 1994. Non-constituent coordination: Theory and practice. In Proceedings of the 15th International Conference on Computational Linguistics (COLING94), volume 2, pages 935-941.
François Mouret. 2006. A phrase structure approach to argument cluster coordination. In Stefan Müller, editor, Proceedings of the 13th International Conference on Head-Driven Phrase Structure Grammar, pages 247-267, Stanford. CSLI Publications.
The first and second authors gratefully acknowledge the support of the ANR SEQUOIA (ANR-08-EMER-013). We thank Pierre Boullier, Éric de La Clergerie, Timm Lichte, Grzegorz Chrupala and our anonymous reviewers for their comments. All remaining errors would be ours.
See (Abeillé, 2006; Mouret, 2006) for discussions about this assumption.
Another possibility would be to merge NR with NL, as for example in (Sarkar and Joshi, 1996). However, this leads to derived DAGs instead of trees.
This mismatch between the derivations underlying a derived structure and the real derivation structure is also noted by Kallmeyer (2002) for quantifier and verb interrelations.
Thanks to Timm Lichte for bringing this case to our attention.
Allowing unlimited adjunction of ghost auxiliary trees would lead to many spurious ambiguities, whereas having modal verbs or adverbs together with their ghost trees in a MC set would certainly be a step toward an elegant treatment of elided modifiers.
12 To avoid spurious ambiguities when ghost trees are substituted, Local Shared Derivations could be used to check that the right ghost tree has been derived wrt to its antecedent.
13 e.g., [Someone from NY]i seems to have won the cup and εi is likely to win the lottery.
Verbes "à monté" et auxiliaires dans Control Verb, Argument Cluster Coordination and Multi Component TAG une grammaire d'arbres adjoints. Anne Abeillé, LINX. 39Anne Abeillé. 1998. Verbes "à monté" et auxiliaires dans Control Verb, Argument Cluster Coordination and Multi Component TAG une grammaire d'arbres adjoints. LINX, (39):119- 158.
In defense of lexical coordination. Anne Abeillé, Empirical Issues in Formal Syntax and Semantics 6. CSSP online Proceedings. P.Cabredo O.BonamiAnne Abeillé. 2006. In defense of lexical coordination. In P.Cabredo O.Bonami, editor, Empirical Issues in Formal Syntax and Semantics 6. CSSP online Proceed- ings.
Coordinate ellipsis and apparent non-constituent coordination. John Beavers, Ivan A Sag, Proceedings of the HPSG04 Conference. the HPSG04 ConferenceJohn Beavers and Ivan A. Sag. 2004. Coordinate ellip- sis and apparent non-constituent coordination. In Pro- ceedings of the HPSG04 Conference, pages 48-69.
Ellipsis and higher-order unification. Mary Dalrymple, Stuart M Shieber, Fernando C N Pereira, Linguistics and Philosophy. 144Mary Dalrymple, Stuart M. Shieber, and Fernando C. N. Pereira. 1991. Ellipsis and higher-order unification. Linguistics and Philosophy, 14(4):399-452.
Semantic construction in feature-based tag. Claire Gardent, Laura Kallmeyer, EACL '03: Proceedings of the tenth conference on European chapter of the Association for Computational Linguistics. Budapest, HungaryAssociation for Computational LinguisticsClaire Gardent and Laura Kallmeyer. 2003. Semantic construction in feature-based tag. In EACL '03: Pro- ceedings of the tenth conference on European chap- ter of the Association for Computational Linguistics, pages 123-130, Budapest, Hungary. Association for Computational Linguistics.
Factoring predicate argument and scope semantics: Underspecified semantics with LTAG. Laura Kallmeyer, Aravind Joshi, Proceedings of the 12th. the 12thLaura Kallmeyer and Aravind Joshi. 1999. Factoring predicate argument and scope semantics: Underspeci- fied semantics with LTAG. In Proceedings of the 12th
. Amsterdam Colloquium, Amsterdam Colloquium, December.
Factoring Predicate Argument and Scope Semantics: Underspecified Semantics with LTAG. L Kallmeyer, A Joshi, Research on Language & Computation. 11L. Kallmeyer and A. Joshi. 2003. Factoring Predicate Argument and Scope Semantics: Underspecified Se- mantics with LTAG. Research on Language & Com- putation, 1(1):3-58.
Enriching the tag derivation tree for semantics. Laura Kallmeyer, Proceedings of KONVENS 2002. KONVENS 2002Laura Kallmeyer. 2002. Enriching the tag derivation tree for semantics. Proceedings of KONVENS 2002, pages 67-74.
A declarative characterization of a declarative characterization of multicomponent tree adjoining grammars. Laura Kallmeyer, Proceedings of Traitement automatique des langues Naturelles -TALN'05. Traitement automatique des langues Naturelles -TALN'05Dourdan, FranceLaura Kallmeyer. 2005. A declarative characterization of a declarative characterization of multicomponent tree adjoining grammars. In Proceedings of Traite- ment automatique des langues Naturelles -TALN'05, Dourdan, France.
Simpler tag semantics through synchronization. Rebecca Nesson, Stuart M Shieber, Proceedings of the 11th Conference on Formal Grammar. the 11th Conference on Formal GrammarMalaga, SpainRebecca Nesson and Stuart M. Shieber. 2006. Sim- pler tag semantics through synchronization. In Pro- ceedings of the 11th Conference on Formal Grammar, Malaga, Spain, pages 29-30.
Formal and Computational Aspects of Natural Language Syntax. Owen Rambow, University of PennsylvaniaPh.D. thesisOwen Rambow. 1994. Formal and Computational As- pects of Natural Language Syntax. Ph.D. thesis, Uni- versity of Pennsylvania.
Handling coordination in a tree adjoining grammar. Anook Sarkar, Aravind K Joshi, Philadelphia, PADept. of Computer and Info. Sc., Univ. of PennsylvaniaTechnical reportAnook Sarkar and Aravind K. Joshi. 1996. Handling coordination in a tree adjoining grammar. Technical report, Dept. of Computer and Info. Sc., Univ. of Penn- sylvania, Philadelphia, PA.
How to build argumental graphs using TAG shared forest: a view from control verbs problematic. Djamé Seddah, Bertrand Gaiffe, Proceedings of the 5th International Conference on the Logical Aspect of Computional Linguistic -LACL'05. the 5th International Conference on the Logical Aspect of Computional Linguistic -LACL'05Bordeaux, FranceDjamé Seddah and Bertrand Gaiffe. 2005. How to build argumental graphs using TAG shared forest: a view from control verbs problematic. In Proceedings of the 5th International Conference on the Logical Aspect of Computional Linguistic -LACL'05, Bordeaux, France, Apr.
Modeling and analysis of elliptic coordination by dynamic exploitation of derivation forests in LTAG parsing. Djamé Seddah, Benoît Sagot, Proceedings of TAG+8. TAG+8Sydney, AustraliaDjamé Seddah and Benoît Sagot. 2006. Modeling and analysis of elliptic coordination by dynamic exploita- tion of derivation forests in LTAG parsing. In Proceed- ings of TAG+8, Sydney, Australia.
The use of MCTAG to Process Elliptic Coordination. Djamé Seddah, Proceeding of the Ninth International Workshop on Tree Adjoining Grammars and Related Formalisms (TAG+9). eeding of the Ninth International Workshop on Tree Adjoining Grammars and Related Formalisms (TAG+9)Tuebingen, GermanyDjamé Seddah. 2008. The use of MCTAG to Process El- liptic Coordination. In Proceeding of the Ninth Inter- national Workshop on Tree Adjoining Grammars and Related Formalisms (TAG+9), Tuebingen, Germany, June.
Extraction phenomena in synchronous tag syntax and semantics. M Stuart, Rebecca Shieber, Nesson, Proceedings of the Workshop on Syntax and Structure in Statistical Translation. the Workshop on Syntax and Structure in Statistical TranslationRochester, New York26Stuart M. Shieber and Rebecca Nesson. 2007. Extraction phenomena in synchronous tag syntax and semantics. In Proceedings of the Workshop on Syntax and Struc- ture in Statistical Translation, Rochester, New York, volume 26.
Synchronous Tree Adjoining Grammars. Stuart Shieber, Yves Schabes, COLING. Helsinki3Stuart Shieber and Yves Schabes. 1990. Synchronous Tree Adjoining Grammars. In COLING, volume 3, pages 253-260, Helsinki.
The Syntactic Process. J Mark, Steedman, The MIT PressCambridge, MAMark J. Steedman. 2001. The Syntactic Process. The MIT Press, Cambridge, MA.
From metagrammars to factorized TAG/TIG parsers. Éric Villemonte De La Clergerie, Proceedings of the Fifth International Workshop on Parsing Technology (IWPT'05). the Fifth International Workshop on Parsing Technology (IWPT'05)Vancouver, CanadaÉric Villemonte de La Clergerie. 2005. From meta- grammars to factorized TAG/TIG parsers. In Pro- ceedings of the Fifth International Workshop on Pars- ing Technology (IWPT'05), pages 190-191, Vancou- ver, Canada, October.
Characterizing mildly contextsensitive grammar formalisms. J David, P A Weir ; Philadelphia, Usa Supervisor-Aravind, K Joshi, Ph.D. thesisDavid J. Weir. 1988. Characterizing mildly context- sensitive grammar formalisms. Ph.D. thesis, Philadel- phia, PA, USA. Supervisor-Aravind K. Joshi. |
259,376,476 | Aristoxenus at SemEval-2023 Task 4: A Domain-Adapted Ensemble Approach to the Identification of Human Values behind Arguments | This paper presents our system for the SemEval-2023 Task 4, which aims to identify human values behind arguments by classifying whether or not an argument draws on a specific category. Our approach leverages a second-phase pre-training method to adapt a RoBERTa Language Model (LM) and tackles the problem using a One-Versus-All strategy. Final predictions are determined by a majority voting module that combines the outputs of an ensemble of three sets of per-label models. We conducted experiments to evaluate the impact of different pre-trained LMs on the task, comparing their performance in both pre-trained and task-adapted settings. Our findings show that fine-tuning the RoBERTa LM on the taskspecific dataset improves its performance, outperforming the best-performing baseline BERT approach. Overall, our approach achieved a macro-F1 score of 0.47 on the official test set, demonstrating its potential in identifying human values behind arguments. | [
248094722,
201058550
] | Aristoxenus at SemEval-2023 Task 4: A Domain-Adapted Ensemble Approach to the Identification of Human Values behind Arguments
July 13-14, 2023
Dimitrios Zaikis, dimitriz@csd.auth.gr, School of Informatics, Aristotle University of Thessaloniki
Stefanos D. Stefanidis, stdistefanidis@gmail.com, School of Informatics, Aristotle University of Thessaloniki
Ioannis Vlahavas, vlahavas@csd.auth.gr, School of Informatics, Aristotle University of Thessaloniki
Aristoxenus at SemEval-2023 Task 4: A Domain-Adapted Ensemble Approach to the Identification of Human Values behind Arguments
Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023), July 13-14, 2023
Konstantinos Anagnostopoulos, Independent researcher
This paper presents our system for the SemEval-2023 Task 4, which aims to identify human values behind arguments by classifying whether or not an argument draws on a specific category. Our approach leverages a second-phase pre-training method to adapt a RoBERTa Language Model (LM) and tackles the problem using a One-Versus-All strategy. Final predictions are determined by a majority voting module that combines the outputs of an ensemble of three sets of per-label models. We conducted experiments to evaluate the impact of different pre-trained LMs on the task, comparing their performance in both pre-trained and task-adapted settings. Our findings show that fine-tuning the RoBERTa LM on the taskspecific dataset improves its performance, outperforming the best-performing baseline BERT approach. Overall, our approach achieved a macro-F1 score of 0.47 on the official test set, demonstrating its potential in identifying human values behind arguments.
Introduction
The ValueEval task aims to develop a system for automatically detecting the values expressed in natural language arguments within English texts (Kiesel et al., 2023). Identifying human values is critical for gaining insights into people's behavior, evaluating content, personalizing experiences, and resolving conflicts. Analyzing the values expressed in language, including beliefs, attitudes, and motivations, can help us understand the quality and relevance of content and its potential impact (Kiesel et al., 2022). Moreover, identifying individual values can be useful in conflict resolution by enabling us to comprehend the underlying beliefs and motivations of opposing viewpoints. This can facilitate finding common ground and working towards a resolution that is acceptable to all. There-fore, identifying human values has the potential to play a significant role in various fields, including psychology, sociology, marketing, and others dealing with human behavior and communication.
In this paper, we propose a Transformer-based Language Model (LM) system for the ValueEval task, which utilizes second-phase pre-training in an One-Versus-All (OVA) setting to identify the human values expressed in arguments. Our approach combines both data and algorithm adaptation concepts, whereby we second-phase pre-train an LM to better adapt to the domain and transform the data to better represent the task. To align with the nature of the task and the dataset (Mirzakhmedova et al., 2023), we implement a form of prompt engineering. This involves transforming the premise, stance, and conclusion inputs into a single sentence while replacing the stance with a predefined template. Moreover, we task-adapt the RoBERTa LM by aligning it with the masked language-modeling objective to predict the probability of each stance given an argument and conclusion. To improve the model's performance, we train multiple models for each label, based on different hyperparameters and versions of the dataset that are sampled differently. Finally, we use majority voting to form the final predictions.
In addition to the system description presented in this paper, we make the following observations based on our approach and experiments: Firstly, we observed that second-phase pre-training in the form of task-adaptation allows the underlying LM to better represent the task in the embedding space. This leads to improved performance in identifying the human values expressed in arguments. Secondly, we found that utilizing an OVA approach, also known as One-Versus-Rest, dramatically improves performance compared to using a single multi-label classifier. Finally, we observed that the effectiveness of data sampling techniques varied per label, with a subset of per-label models performing better without it. This highlights the importance of experimenting with different techniques to find the optimal approach for each label.
Background
While human values have long been an important consideration in formal argumentation, this task represents the first attempt to computationally identify the values behind arguments. To that end, Kiesel et al. (2022) presented the first dataset containing the conclusion, the premise's stance towards the conclusion, and the premise itself, as shown in the example in Table 1.
Argument ID: A01010
Conclusion: We should prohibit school prayer
Stance: against
Premise: it should be allowed if the student wants to pray as long as it is not interfering with his classes

The task involves determining whether a given textual argument relates to a specific category from a set of 20 value categories of human values derived from the social science literature. The baseline approaches for this multi-label classification problem include a "1-Baseline" where the positive label is assigned to all instances, a label-wise "SVM" and a Transformer-based approach, called "BERT".
One of the main advantages of Transformerbased LM approaches is their ability to capture complex linguistic structures and dependencies, which can be difficult to model using traditional approaches. In general, LM models can learn to understand context, ambiguity, and figurative language, which are all important aspects in argumentation mining, which is reflected by the published results as well, where "BERT" significantly outperforms the other approaches. This approach utilizes the BERT language model (Devlin et al., 2018) that uses stacked Transformer-based encoders (Vaswani et al., 2017), pre-trained on a large corpus of text data.
RoBERTa (Liu et al., 2019) is a variation of the BERT LM that was designed to improve upon some of its limitations, using a similar architecture but pre-trained on a larger and more diverse corpus of text data, with longer training, larger batches and dynamic masking instead of a single static masking pass. This approach is intended to help RoBERTa capture more complex linguistic patterns and relationships than BERT. RoBERTa also simplifies the pre-training objective by dropping BERT's next sentence prediction task and relying on masked language modeling alone, which was found to improve downstream performance and lets the model capture a wider range of linguistic knowledge.
By fine-tuning the pre-trained RoBERTa model on a specific task, the model can be optimized to better handle the specific requirements of that task and can result in improved performance. Accordingly, second-phase pre-training (Gururangan et al., 2020) can further improve an LM's performance with domain or task-adaptive pre-training that allows the model to learn task-specific features and patterns that are not captured by the general language model. The transfer of knowledge (transfer learning) allows a pre-trained LM to adapt to a specific task with less labeled training data and build upon the wide range of linguistic patterns and relationships previously learned.
System Overview
In this section, we describe our proposed Transformer-based system for the identification of human values behind arguments in detail, where, given a conclusion, a stance and a premise, the input is classified with respect to each of the 20 pre-defined value categories. Our system consists of an LM adaptation (TAPT pre-training), data transformation, OVA training and tuning phase, as shown in Figure 1.

[Figure 1: Overview of our proposed ensemble system with second-phase pre-training (task adaptation) and majority voting.]
Language model alignment (task adaptation)
Task adaptation refers to the process of second phase pre-training a LM with domain-specific unlabeled data that can potentially lead to performance improvements in that specific topic or domain (Gururangan et al., 2020). Towards that end, we implemented a slightly different approach where instead of aligning the LM on the task dataset, we trained the underlying language model on a different task.
BERT-based models are trained using two types of sentences, Sentence A and Sentence B with the first being a sentence of a given input sequence, while the latter is a second sentence. This approach is commonly used in question answering or text classification tasks, where the input consists of a pair of sentences, where sentence A is a question or a prompt, and sentence B is the text to be classified or used to answer the question. In some cases, Sentence B might be the next sentence that follows Sentence A, but in other cases, it might be a sentence that is randomly chosen from the same document as Sentence A. During the pre-training phase, BERT-based models are trained to learn a joint representation of both Sentence A and Sentence B using either Masked Language Modeling (MLM) or Next Sentence Prediction (NSP) tasks. In the MLM task, the model is trained to predict the masked words in Sentence A, while in the NSP task, the model is trained to predict whether Sentence B follows Sentence A in the original text or not.
We followed a supervised training approach to task adaption by training the model for the classification of the stance using the premise as Sentence A and the conclusion as Sentence B. Based on the argument presented in Table 1, the premise, "It should be allowed if the student wants to pray as long as it is not interfering with his classes" is used as Sentence A, while the conclusion, "We should prohibit school prayer" is used as Sentence B. By training the LM on this type of input, it can learn to classify the stance of a given text based on the relationship between the premise and conclusion. In this example, the model should predict that the stance is against school prayer since the premise argues for allowing it, while the conclusion argues for prohibiting it. By training on both Sentence A and Sentence B, BERT-based models can learn to understand the relationship between different sentences in a given text and capture the contextual meaning of the input sequence.
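As a rough illustration of this supervised alignment step, the sketch below shows how such a premise/conclusion pair could be fed to RoBERTa as a sentence pair for stance classification using the Hugging Face Transformers library; the label mapping and the single gradient step are assumptions made for the example, not the authors' exact training code.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained("roberta-base", num_labels=2)

premise = ("it should be allowed if the student wants to pray "
           "as long as it is not interfering with his classes")   # Sentence A
conclusion = "We should prohibit school prayer"                   # Sentence B
stance = torch.tensor([0])  # assumed mapping: 0 = "against", 1 = "in favor of"

# Encode the pair and compute the stance-classification loss for one example.
inputs = tokenizer(premise, conclusion, truncation=True, return_tensors="pt")
outputs = model(**inputs, labels=stance)
outputs.loss.backward()  # one illustrative optimization step; optimizer and loop omitted
```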
Per Label Task adapted models
Our proposed system is built on the task-adapted RoBERTa base language model, where we train separate binary classification models for each label. We use an OVA approach, where each model is trained to differentiate between instances of one class and instances of all other classes combined, effectively identifying instances of its corresponding label, while ignoring instances of all other labels.
To form the final input sentence string, we implement prompt engineering by concatenating the conclusion C to the premise P , using a connecting phrase that reflects the stance St and connects the premise and conclusion. Specifically, we use the format S = C + R St + P , where R St represents the appropriate connecting phrase. This process enables the model to take into account the relationship between the premise and conclusion, and the stance expressed in the connecting phrase.
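A minimal sketch of this construction, using connecting phrases like those in Table 2, is shown below; the random choice per example and the helper name are our own illustration rather than the authors' code.

```python
import random

# Connecting phrases R_St, keyed by stance (a subset of Table 2).
CONNECTORS = {
    "against": ["so it is not valid to say that", "so it is wrong that"],
    "in favor of": ["so", "thus", "therefore"],
}

def build_input(conclusion: str, stance: str, premise: str) -> str:
    """Form S = C + R_St + P as a single continuous sentence."""
    connector = random.choice(CONNECTORS[stance])
    return f"{conclusion} {connector} {premise}"

print(build_input("We should prohibit school prayer", "against",
                  "it should be allowed if the student wants to pray"))
```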
After forming the input sentence, it is passed through the model's transformer layers, which takes in the tokenized sentence and the attention mask. The resulting output is then processed using either mean pooling or the model's pooled output and passed through a classification layer to produce the final per label binary output prediction.
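A sketch of one such per-label classifier with mean pooling is given below; the dropout rate, head size and class name are illustrative assumptions rather than the exact architecture.

```python
import torch
import torch.nn as nn
from transformers import AutoModel

class OvaValueClassifier(nn.Module):
    """One binary (One-Versus-All) classifier on top of the task-adapted encoder."""

    def __init__(self, encoder_name: str = "roberta-base"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(encoder_name)
        hidden = self.encoder.config.hidden_size
        self.head = nn.Sequential(nn.Dropout(0.1), nn.Linear(hidden, 1))

    def forward(self, input_ids, attention_mask):
        states = self.encoder(input_ids=input_ids,
                              attention_mask=attention_mask).last_hidden_state
        # Mean pooling over non-padding tokens.
        mask = attention_mask.unsqueeze(-1).float()
        pooled = (states * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1e-9)
        return self.head(pooled).squeeze(-1)  # logit: does the argument draw on this value?
```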
Ensemble Module with Majority Voting
Given the total number of labels N L = 20, we trained and tuned two distinct models for each label, resulting in a total of 40 models. We then grouped these models into three sets of OVA classifiers. Each set followed the architecture described in Section 3.2, and was individually hyperparametertuned, but trained on a different subset of the task dataset. One set was trained on the original dataset without any sampling, one set was trained on a down-sampled dataset, and the final set consisted of the best-performing models from the first two sets, which could either be non-sampled or downsampled models.
During inference, we used each binary classifier corresponding to each class to predict the probability that a given sample belongs to that class. To generate the final prediction, we employed a majority voting approach. Specifically, we assigned a binary label based on a fine-tuned threshold, and the final label was determined by the majority vote among the three sets of models.
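A small sketch of the voting step for a single label is given below; the 0.5 threshold is a placeholder for the tuned, per-label value.

```python
import numpy as np

def vote_for_label(probs_per_set: np.ndarray, threshold: float = 0.5) -> int:
    """probs_per_set holds one probability per model set (no-sampling, down-sampled, best-of)."""
    votes = (probs_per_set >= threshold).astype(int)
    return int(votes.sum() >= 2)  # positive if at least two of the three sets agree

print(vote_for_label(np.array([0.62, 0.41, 0.55])))  # -> 1
```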
Experimental Setup
Dataset and Evaluation Methodology
We transformed the dataset for two different purposes: replacing the stance with a connecting phrase, and balancing the dataset through down-sampling of the majority label instances for the per-label (OVA) approach. To replace the stance, we concatenate the conclusion and premise sequences with a connecting phrase selected at random from a stance-dependent set of phrases, as shown in Table 2. For example, the argument presented in Table 1 would become "We should prohibit school prayer so it is wrong to say that it should be allowed if the student wants to pray as long as it is not interfering with his classes". This process allows the model to learn from a continuous context and to learn semantically relevant representations that take the complete argument into account. On the other hand, transforming the dataset for OVA involves converting the original multi-label annotations into binary labels to create a set of binary labeled datasets, each corresponding to a single value category.
against: "so it is not valid to say that", "so it is wrong that"
in favor of: "so", "thus", "therefore", ". Subsequently", ". As a result", ". So it is valid to say that", ", so it is true that"

The dataset had pre-defined train, validation and test splits, with labels for both the training and development sets. We created a development set from the train dataset by splitting it into 80% for training and 20% for development and used the provided validation set as test set. We follow the task's evaluation strategy using the label-wise F1-score and its mean over all labels (macro-averaged F1), which is the harmonic mean of the Precision and Recall metrics, applying the same weight to all classes.
Training
We trained each model on a single label and tuned the hyperparameters with the Optuna library (Akiba et al., 2019), using the search space shown in Table 3, and trained for 100 epochs with an early stopping patience of 20. The best hyperparameters for each per-label model are shown in Appendix A.1 (Table 6). We trained the models without sampling and with down-sampling to create the two sets of OVA classifiers and used the best performing model per label to create the third set. Additionally, we experimented with different pre-trained language models, such as BERT (Devlin et al., 2018), ALBERT (Lan et al., 2019), MPNet (Yee et al., 2019), XLNet (Yang et al., 2019), DistilBERT and DistilRoBERTa (Sanh et al., 2019), as well as both the base and the large variant of RoBERTa (Liu et al., 2019). We task-adapted these models and trained them on the downstream task using the best per-label hyperparameters and compared their average performance on the labeled validation set.
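The per-label search could look roughly like the sketch below; since Table 3's contents are not reproduced here, the parameter ranges and the train_and_evaluate stub are illustrative assumptions.

```python
import optuna

def train_and_evaluate(label: str, **params) -> float:
    """Stub standing in for training one binary model with early stopping and returning its validation F1."""
    return 0.0  # placeholder

def objective(trial: optuna.Trial) -> float:
    params = {
        "learning_rate": trial.suggest_float("learning_rate", 1e-6, 5e-5, log=True),
        "batch_size": trial.suggest_categorical("batch_size", [8, 16, 32]),
        "weight_decay": trial.suggest_float("weight_decay", 0.0, 0.1),
        "pooling": trial.suggest_categorical("pooling", ["mean", "pooled"]),
    }
    return train_and_evaluate(label="Hedonism", **params)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=50)
print(study.best_params)
```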
We implemented our described system with the Python programming language (3.8.16) and the PyTorch (1.10.2) and Transformers (4.23.1) libraries on a single computer with a 24-core Intel CPU and two Nvidia RTX A6000 graphics cards.
The code is available at: https://github.com/d1mitriz/aristoxenus-semeval23-task4/
Results
This section describes the overall results compared to the best approach and our experimental results that led to our proposed system.
Overall results
In the ValueEval task, 40 teams submitted a total of 182 entries, including those from the organizers. Table 4 shows the official results for our system, as well as the baselines (1-Baseline and BERT), the top-performing approach, and the best-performing systems for each category. Our system achieved a macro-F1 score of 0.47 on the official test set, outperforming both baselines overall and surpassing the BERT approach in 19 out of 20 value categories.
Despite our system's strong performance compared to the baselines, it fell short compared to the best approach. Nevertheless, our approach demonstrates that an ensemble approach utilizing different training regimens using an OVA strategy can improve over a single multi-label classifier.
Experimental results
Initially, we developed a single multi-label classifier that could predict all value categories using a single architecture and classification head. Our goal was to explore how the model could leverage the inter-dependencies between the categories and the nature of the data. However, this approach resulted in the lowest performance among our experiments.
Despite our initial expectations, we found that the single classifier struggled to capture the subtle differences between the value categories and the complex relationships between them. Additionally, the relatively large number of categories and the imbalanced distribution of the data made it challenging for the model to learn meaningful representations for each category. As a result, we decided to explore alternative approaches, such as using separate classifiers for each category and incorporating additional features to improve the model's performance.
To address the limitations of the single classifier approach, we decided to split the responsibility across 20 models, each focused on predicting a single label using an One-Versus-All strategy with a majority vote system. The foundation of our approach was the underlying LM that generated semantically and contextually relevant embeddings for the input data.
To determine the best LM for this task and setting, we experimented with various base and large versions and evaluated their adaptation capabilities on the evaluation set. Table 5 summarizes the results of the LM experiments. Overall, we found that the base version of RoBERTa achieved the best results in terms of macro-F1 score.
Furthermore, we observed that fine-tuning the LM to the task-specific data improved its performance, suggesting that the LM could effectively learn to represent the unique features and nuances of this task. These findings informed our final system, which incorporated a second phase pre-trained RoBERTa-based model fine-tuned on the ValueEval dataset and achieved competitive results in the task.
Conclusion
In this paper, we describe our Transformer-based Language Model system for the ValueEval task, which utilizes second-phase pre-training in a One-Versus-All (OVA) setting to identify the human values expressed in arguments. We task-adapt the RoBERTa LM to the domain by training the model to predict the stance that connects the conclusion to the premise. Furthermore, we transform the input data to better capture the semantic and contextual information in a continuous way, by replacing the stance with a connecting phrase. Our system predicts based on a majority vote over the predictions of an ensemble of three different sets of per-label models. We show that the task-adaptation improves the system's performance, indicating that language models can learn to generate better embeddings by aligning them to this task. A possible direction for future work would be to investigate the impact of different language models as well as data transformation techniques on the system's predictive capabilities.

Table 4: Achieved F1-score of team aristoxenus per test dataset, from macro-precision and macro-recall (All) and for each of the 20 value categories. Approaches marked with * were not part of the official evaluation. Approaches in gray are shown for comparison: an ensemble using the best participant approach for each individual category; the best participant approach; and the organizer's BERT and 1-Baseline.
Table 1: Example that includes the conclusion, stance and premise of an argument.

Table 2: Pool of phrases that would connect the conclusion with the premise depending on the label.

Table 3: Hyperparameter search space for each binary classification model.
Table 4 data (test set: Main). The columns after "All" follow this category order: Self-direction: thought, Self-direction: action, Stimulation, Hedonism, Achievement, Power: dominance, Power: resources, Face, Security: personal, Security: societal, Tradition, Conformity: rules, Conformity: interpersonal, Humility, Benevolence: caring, Benevolence: dependability, Universalism: concern, Universalism: nature, Universalism: tolerance, Universalism: objectivity.

Approach            All  | per-category F1
Best per category   .59  | .61 .71 .39 .39 .66 .50 .57 .39 .80 .68 .65 .61 .69 .39 .60 .43 .78 .87 .46 .58
Best approach       .56  | .57 .71 .32 .25 .66 .47 .53 .38 .76 .64 .63 .60 .65 .32 .57 .43 .73 .82 .46 .52
BERT                .42  | .44 .55 .05 .20 .56 .29 .44 .13 .74 .59 .43 .47 .23 .07 .46 .14 .67 .71 .32 .33
1-Baseline          .26  | .17 .40 .09 .03 .41 .13 .12 .12 .51 .40 .19 .31 .07 .09 .35 .19 .54 .17 .22 .46
Ours                .47  | .58 .66 .09 .25 .58 .07 .50 .29 .75 .61 .56 .51 .52 .27 .49 .20 .76 .77 .34 .40
Table 5: Experimental results of the different pre-trained language models and their task-adapted counterparts. TAPT in the model names denotes Task-Adapted Pre-Trained; bold marks the best result. All results are based on the averaged macro-F1 score on the validation dataset.
A Appendix

A.1 Hyperparameters
Takuya Akiba, Shotaro Sano, Toshihiko Yanase, Takeru Ohta, and Masanori Koyama. 2019. Optuna: A next-generation hyperparameter optimization framework. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '19, pages 2623-2631, New York, NY, USA. Association for Computing Machinery.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

Suchin Gururangan, Ana Marasović, Swabha Swayamdipta, Kyle Lo, Iz Beltagy, Doug Downey, and Noah A. Smith. 2020. Don't stop pretraining: Adapt language models to domains and tasks. arXiv preprint arXiv:2004.10964.

Johannes Kiesel, Milad Alshomary, Nicolas Handke, Xiaoni Cai, Henning Wachsmuth, and Benno Stein. 2022. Identifying the Human Values behind Arguments. In 60th Annual Meeting of the Association for Computational Linguistics (ACL 2022), pages 4459-4471. Association for Computational Linguistics.

Johannes Kiesel, Milad Alshomary, Nailia Mirzakhmedova, Maximilian Heinrich, Nicolas Handke, Henning Wachsmuth, and Benno Stein. 2023. SemEval-2023 Task 4: ValueEval: Identification of human values behind arguments. In Proceedings of the 17th International Workshop on Semantic Evaluation, Toronto, Canada. Association for Computational Linguistics.

Zhenzhong Lan, Mingda Chen, Sebastian Goodman, Kevin Gimpel, Piyush Sharma, and Radu Soricut. 2019. ALBERT: A lite BERT for self-supervised learning of language representations. arXiv preprint arXiv:1909.11942.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.

Nailia Mirzakhmedova, Johannes Kiesel, Milad Alshomary, Maximilian Heinrich, Nicolas Handke, Xiaoni Cai, Valentin Barriere, Doratossadat Dastgheib, Omid Ghahroodi, Mohammad Ali Sadraei, Ehsaneddin Asgari, Lea Kawaletz, Henning Wachsmuth, and Benno Stein. 2023. The Touché23-ValueEval Dataset for Identifying Human Values behind Arguments. CoRR, abs/2301.13771.

Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. arXiv preprint arXiv:1910.01108.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Proceedings of the 31st International Conference on Neural Information Processing Systems, NIPS'17, pages 6000-6010, Red Hook, NY, USA. Curran Associates Inc.

Zhilin Yang, Zihang Dai, Yiming Yang, Jaime Carbonell, Ruslan Salakhutdinov, and Quoc V. Le. 2019. XLNet: Generalized autoregressive pretraining for language understanding. arXiv preprint arXiv:1906.08237.

Kyra Yee, Yann Dauphin, and Michael Auli. 2019. Simple and effective noisy channel modeling for neural machine translation. In Conference on Empirical Methods in Natural Language Processing.
Table 6: Optimal hyperparameters obtained from tuning with Optuna for each label and model (excerpt).

Self-direction: thought      0.2346715251   1.7632e-06
Self-direction: action       0.2392132739   1.9734e-06
Hedonism                     0.2721909443   2.4005e-06
Conformity: rules            0.2284257602   9.5028e-06
Universalism: tolerance      0.2228968031   4.3012e-06
Universalism: objectivity    0.2252762583   8.3819e-06
259,376,546 | Alexa at SemEval-2023 Task 10: Ensemble Modeling of DeBERTa and BERT Variations for Identifying Sexist Text | This study presents an ensemble approach for detecting sexist text in the context of the Semeval-2023 task 10. Our approach leverages 18 models, including DeBERTa-v3-base models with different input sequence lengths, a BERT-based model trained on identifying hate speech, and three more models pre-trained on the task's unlabeled data with varying input lengths. The results of our framework on the development set show an f1-score of 84.92% and on the testing set 84.55%, effectively demonstrating the strength of the ensemble approach in getting accurate results. | [
227230776,
257405434,
218974440,
174798983
] | Alexa at SemEval-2023 Task 10: Ensemble Modeling of DeBERTa and BERT Variations for Identifying Sexist Text
July 13-14, 2023
Mutaz Younes mutazyounes@gmail.com
Dep. of Computer Science
Dep. of Computer Science
Maharishi International University Iowa
USA
Ali Kharabsheh alikharabsha12@gmail.com
Dep. of Computer Science
Yarmouk University Irbid
Jordan
Mohammad Bani Younes
Ajloun National University Irbid
Jordan
Alexa at SemEval-2023 Task 10: Ensemble Modeling of DeBERTa and BERT Variations for Identifying Sexist Text
Proceedings of the The 17th International Workshop on Semantic Evaluation (SemEval-2023)
the The 17th International Workshop on Semantic Evaluation (SemEval-2023)July 13-14, 2023
This study presents an ensemble approach for detecting sexist text in the context of the Semeval-2023 task 10. Our approach leverages 18 models, including DeBERTa-v3-base models with different input sequence lengths, a BERT-based model trained on identifying hate speech, and three more models pre-trained on the task's unlabeled data with varying input lengths. The results of our framework on the development set show an f1-score of 84.92% and on the testing set 84.55%, effectively demonstrating the strength of the ensemble approach in getting accurate results.
Introduction
Sexist language has been a persistent issue in various forms of communication, including online communication. The use of sexist language perpetuates gender stereotypes and reinforces the marginalization of certain groups, particularly women. Detecting and addressing instances of sexist language is crucial for promoting gender equality and reducing discrimination.
In recent years, there has been a growing interest in developing methods for automatically detecting sexist language. The Semeval-2023 task 10 provides (Kirk et al., 2023) a platform for researchers to explore and evaluate various approaches to identifying sexist text. This task involves identifying instances of sexist language in various forms of online communication, including social media posts, comments, and reviews.
Previous research has explored different methods for identifying sexist language, including rulebased approaches, machine-learning techniques, and deep-learning models. While these approaches have shown promising results, the challenge of detecting sexist language remains complex due to the subtleties and nuances of language use.
Our approach for identifying sexist language in the Semeval-2023 task 10 subtask A involves an ensemble of 18 different models, including DeBERTa-v3-base and BERT-based models pre-trained on the task's unlabeled data. The models' predictions are combined to improve overall performance. Our method achieved an f1-score of 84.92% on the development dataset and 84.55% on the testing dataset, ranking 30th out of 89 participating teams.
Section 2 of this paper introduces an overview of the related work in the field of sexist text identification. Section 3 gives an overview of the details of the dataset we used in this research. Section 4 presents key characteristics of the ensemble approach and provides a detailed description of the models used. Finally, in Section 5, we conclude the paper and discuss future directions for research in this area.
Background
Sexism against women on social media has become a growing concern, leading researchers to develop automatic systems that detect sexist text. Online competitions such as the SemEval-2022 Task 5 on "Multimedia automatic misogyny identification" and "sEXism Identification in Social neTworks (EXIST)" at IberLEF 2021 (Fersini et al., 2022;Rodríguez-Sánchez et al., 2021) have helped accelerate this research. The competition aimed to identify sexism in social media content using machine learning, with two sub-tasks: binary classification and fine-grained classification distinguishing between five different types of sexist content.
19 teams participated in the EXIST 2021 benchmark and proposed different approaches to solve the task. The team UMUTeam (García-Díaz et al., 2021) combines linguistic features with state-ofthe-art transformers to achieve an accuracy of 76.47% for binary classification (task 1) and ranks seventh. For the multi-class classification (task 2) they achieve an accuracy of 67.67% ranking third. Another team (Butt et al., 2021), presents their results on the shared task and emphasizes the importance of pre-processing techniques and data augmentation in overcoming the challenges posed by a multilingual dataset and inconsistencies of social media text. Their work achieves an F1 score of 78.02% for binary classification (task 1) ranking ninth and 49.08% for fine-grained classification (task 2) ranking fifth.
The AIT_FHSTP team (Mina et al., 2021) applies two multilingual transformer models, multilingual BERT and XLM-R, using pre-training with additional data and supervised fine-tuning with augmented data. The best model in their work is the XLM-R with a macro F1-score of 77.52% for binary classification (task 1), ranking 13th, and 55.89% for the multi-class classification (task 2), ranking seventh. Kalra and Zubiaga (2021) present a deep neural network approach to detect sexist text, including BERT and DistilBERT. The best model for binary classification (task 1) uses BERT and a multi-filter CNN model, achieving an accuracy of 76.2%. The same model with data augmentation achieves the best performance for multi-class classification with an F1 score of 51.9%.
In earlier work, (Sharifirad et al., 2018) explore the use of knowledge graphs to improve the performance of sexist tweet classifiers. The authors propose using ConceptNet and Wikidata to improve sexist tweet classification by two methods: text augmentation and text generation. In the text generation approach, new tweets are generated by replacing words with data from ConceptNet relations, increasing the size of the training set and preserving the label. In the text augmentation approach, words in the tweets are augmented with words from ConceptNet relations and their descriptions from Wikidata, increasing the length of each tweet without changing the number of tweets in each class. The authors find that their approach significantly improved sexist tweet classification across multiple machine learning models and claim it can be applied to other small dataset problems, such as hate speech or abusive language and text classification.
DataSet
The Subtask A dataset used in this research consists of 20,000 entries sampled from Gab and Reddit. The training dataset, which constitutes 70% of the overall dataset, contains 14,000 entries, out of which 3,398 are labeled as sexist.
Dataset       Sentences   Percentage
Training      14,000      70%
Development   2,000       10%
Testing       4,000       20%

To enable semi-supervised training techniques, the organizers also released two unlabeled datasets consisting of 1 million entries each from Gab and Reddit. The unlabeled datasets were prepared using the same cleaning and preparation procedures as the labeled dataset.
The development data constitutes 10% of the overall dataset and consists of 2,000 entries. The test data, which constitutes 20% of the overall dataset, contains 4,000 entries.
The number of sentences in the training, development, and testing datasets is summarized in Table 1.

System overview

Our system for identifying sexist language in the Semeval-2023 task 10 involves an ensemble approach that combines multiple transformer-based models (Vaswani et al., 2017); Section 4.2 provides details on these models. Specifically, we used a combination of different variations of DeBERTa-v3-base (He et al., 2020) and BERT (Devlin et al., 2018) models. The transformer models are fine-tuned on the provided labeled data and used to classify instances in the development and test sets. The code can be accessed at Code repository.
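As a rough sketch of the fine-tuning step described above, the snippet below fine-tunes a DeBERTa-v3-base checkpoint for binary classification with the Hugging Face Trainer. The file names, hyperparameters, and max length are illustrative placeholders rather than the exact settings used for the submission.

from datasets import load_dataset
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)

model_name = "microsoft/deberta-v3-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Assumed CSV files with "text" and "label" columns prepared from the task data.
data = load_dataset("csv", data_files={"train": "train.csv", "dev": "dev.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

data = data.map(tokenize, batched=True)

args = TrainingArguments(output_dir="out", num_train_epochs=3,
                         per_device_train_batch_size=16, learning_rate=2e-5)
trainer = Trainer(model=model, args=args, tokenizer=tokenizer,
                  train_dataset=data["train"], eval_dataset=data["dev"])
trainer.train()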
Data Preparation
Initially, we attempted to clean the training data by removing punctuations, converting text to lowercase, and removing extra spaces. However, this resulted in a decrease in performance on the development set, so we decided to keep the text as is. Additionally, we also explored the impact of data augmentation on the results. We tried various techniques such as using related data from previous tasks (Rodríguez-Sánchez et al., 2021) to augment the data. However, in general, data augmentation did not lead to a significant improvement in performance. The improvement in results came from ensembling the models rather than data augmentation.
Models Used
In this study, we explore different models to determine the optimal approach for a particular task. We utilize publicly available models and pre-train them on the unlabeled data provided by the task.
We explore a set of models such as RoBERTa models, BART models, DeBERTa models (He et al., 2020), DistilBERT models (Sanh et al., 2019), and pre-trained BERT-based models. Based on the performance of each model on the development set, we eventually ended up choosing only DeBERTa-v3-base and HateBERT (Caselli et al., 2020) and pre-trained them on the unlabeled data provided by the authors.
For both deBERTa-v3-base and hateBERT, we conducted a total of 18 fine-tuning experiments, each employing different combinations of learning rate and max length parameters. We explored various parameter values and ultimately selected those that yielded the highest scores, as shown in Table 2.
In our fine-tuning experiments with deBERTa-v3-base and hateBERT, the max length parameter proved to be an influential factor in determining the models' performance. The max length parameter specifies the maximum number of tokens the model processes in each input sequence. Adjusting this parameter allowed us to explore the trade-off between capturing sufficient context for accurate predictions and reducing the computational resources required.
Interestingly, our experiments revealed that sometimes shorter max length values resulted in better performance. This suggests that a more concise input sequence may provide the models with more focused and relevant context, enabling them to make more accurate predictions. On the other hand, longer max length values could introduce additional noise, potentially hindering the models' performance. We tested various max length values to identify the optimal setting for each model, as demonstrated in Table 2.
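The effect of the max length parameter can be seen directly at the tokenizer level; the short sketch below truncates an example post at a few different limits. The model name and sentence are illustrative only.

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/deberta-v3-base")
text = "An example social media post that keeps going on and on with many extra words."

for max_length in (16, 32, 64):
    # truncation=True caps the encoded sequence at max_length tokens
    encoded = tokenizer(text, truncation=True, max_length=max_length)
    print(max_length, len(encoded["input_ids"]))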
To provide a more detailed analysis, Table 2 displays the results of each model when tested individually on the development and test datasets. Several models, such as deBERTa-v3-base, achieve high f1 scores when used alone. Nevertheless, we observed a slight improvement in overall performance on the test set when combining all the predictions using a soft-ensemble approach, see section 4.3.
Ensemble Approach
We used a soft-ensemble method for combining the predictions of each model to produce the final prediction. The soft-ensemble method has been widely adopted in the field of natural language processing and enables the combination of probabilities, taking into account the confidence of each model's prediction. We also experiment with the hard-ensemble method, which is another commonly used ensemble method. Hard ensembling involves selecting the most frequently predicted class across models and using that as the final prediction. This method does not take into account the confidence of each model's prediction; therefore, we compared the performance of both soft-ensemble and hard-voting methods on the dataset to determine which method yielded the best results, see Table 3.
The approach we used involves each model producing a probability score indicating whether each instance is sexist or not, and these scores are then combined using a weighted average to produce the final prediction. This method has been shown to be effective in improving the accuracy of predictions compared to using a single model (Risch and Krestel, 2020; Dang et al., 2020; Briskilal and Subalalitha, 2022).
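The difference between the two schemes can be sketched in a few lines; the probabilities below are invented, and uniform weights stand in for whatever weighting is chosen.

import numpy as np

# probs[i] = probability that model i assigns to the "sexist" class for one instance.
probs = np.array([0.62, 0.48, 0.55, 0.71, 0.40])
weights = np.ones_like(probs)          # uniform weights in this sketch

# Soft ensemble: weighted average of the probabilities, then threshold.
soft_prediction = int(np.average(probs, weights=weights) >= 0.5)

# Hard ensemble: each model votes with its own thresholded class, majority wins.
votes = (probs >= 0.5).astype(int)
hard_prediction = int(votes.sum() > len(votes) / 2)

print(soft_prediction, hard_prediction)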
Furthermore, we also explore the ability to eliminate some of the models randomly to increase the overall score. Our approach involves generating a random number and using it to select a corresponding number of models from a list of 18 models. We observed a slight improvement in performance when using this approach, but it takes time to see a slight improvement in the results because it is resource-intensive. Therefore, we did not report or submit the output of this method, but we will keep it for further research opportunities.
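A rough sketch of this random model-elimination idea is shown below; the development probabilities and labels are random toy data, and the subset-size range is arbitrary.

import random
import numpy as np

rng = random.Random(0)
dev_probs = np.random.RandomState(0).rand(18, 100)    # 18 models x 100 dev examples
dev_labels = np.random.RandomState(1).randint(0, 2, 100)

def f1(pred, gold):
    tp = np.sum((pred == 1) & (gold == 1))
    fp = np.sum((pred == 1) & (gold == 0))
    fn = np.sum((pred == 0) & (gold == 1))
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

best_score, best_subset = -1.0, None
for _ in range(200):
    k = rng.randint(3, 18)                        # random ensemble size
    subset = rng.sample(range(18), k)             # random choice of models
    pred = (dev_probs[subset].mean(axis=0) >= 0.5).astype(int)
    score = f1(pred, dev_labels)
    if score > best_score:
        best_score, best_subset = score, subset
print(round(best_score, 4), sorted(best_subset))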
Results
The obtained results of the individual models and the ensemble approaches are presented in Table 2 and Table 3.
Our results indicate that ensembling multiple models can be an effective approach for identifying sexist text. The combination of different models provides a more robust solution, leveraging the strengths of each individual model. It is possible that adding even more models to the ensemble would lead to further improvements in performance.
During our error analysis, we found that our ensemble approach had difficulty identifying certain types of sexist sentences. Specifically, we struggled with detecting sexist language that involves immutable gender differences and gender stereotypes, as well as incitement and encouragement of harm. These types of sentences often involved subtle language use and required a deeper understanding of the context and underlying social issues.
Additionally, we found that our approach had difficulty with detecting dehumanizing attacks and overt sexual objectification. These types of language often involved explicit references to the target's body and required a fine-grained analysis of the language use.
Conclusion
In this research, we investigated the use of an ensemble approach for identifying sexist text. Our approach combined multiple transformer models, including different variations of DeBERTa-v3-base, and BERT. We found that the ensemble approach outperformed the individual models, achieving 84.92% f1-score on the development set of the Semeval task and 84.55% on the testing set.
Our results demonstrate the effectiveness of combining multiple models to identify sexist text. The combination of different models provides a more robust solution, leveraging the strengths of each individual model. Our approach also showed that pre-training some of the models on the unlabeled data provided by the task can be an effective way of incorporating relevant information.
Additionally, we explored the impact of data augmentation on the results. Our findings indicate that data augmentation techniques, such as using online data on the same topic, did not result in a significant improvement in performance. The improvement in results came from ensembling the models rather than data augmentation.
Our conclusion highlights the importance of ensembling multiple models for the task of identifying sexist text. Further research is needed to determine the optimal number of models to include in an ensemble and to improve the performance of the ensemble approach.
We discovered that our system struggles with identifying certain forms of subtle and implicit sexism, which is a common challenge in detecting sexist language. In future work, we plan to explore additional feature engineering techniques and alternative methods for model selection to further improve the performance of our ensemble system.
Overall, this research provides valuable insights into the use of an ensemble approach for identifying sexist text. The findings have important implications for future work in this area and demonstrate the potential of using ensembles to tackle complex NLP tasks.
Table 1: Number of Sentences in Training, Development, and Testing Datasets.

Table 2: The 18 models used in our experiments. Results are based on the development data set. All models can be found at https://huggingface.co

Table 3: Results reported based on the development and test sets. The calculation of the prediction with the highest probability involves selecting the prediction from the most confident model, determined by the predicted probability of each label.

Models                   Dev      Test     Notes
Ensemble of 18 models    84.74%   84.46%   prediction with highest probability
Ensemble of 18 models    85.05%   84.45%   hard-ensemble method
Ensemble of 18 models    84.92%   84.55%   soft-ensemble method
J Briskilal and CN Subalalitha. 2022. An ensemble model for classifying idioms and literal texts using BERT and RoBERTa. Information Processing & Management, 59(1):102756.

Sabur Butt, Noman Ashraf, Grigori Sidorov, and Alexander F Gelbukh. 2021. Sexism identification using BERT and data augmentation - EXIST2021. In IberLEF@SEPLN, pages 381-389.

Tommaso Caselli, Valerio Basile, Jelena Mitrović, and Michael Granitzer. 2020. HateBERT: Retraining BERT for abusive language detection in English. arXiv preprint arXiv:2010.12472.

Huong Dang, Kahyun Lee, Sam Henry, and Ozlem Uzuner. 2020. Ensemble BERT for classifying medication-mentioning tweets. In Proceedings of the Fifth Social Media Mining for Health Applications Workshop & Shared Task, pages 37-41.

Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.

Elisabetta Fersini, Francesca Gasparini, Giulia Rizzi, Aurora Saibene, Berta Chulvi, Paolo Rosso, Alyssa Lees, and Jeffrey Sorensen. 2022. SemEval-2022 Task 5: Multimedia automatic misogyny identification. In Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022), pages 533-549.

José Antonio García-Díaz, Ricardo Colomo-Palacios, and Rafael Valencia-García. 2021. UMUTeam at EXIST 2021: Sexist language identification based on linguistic features and transformers in Spanish and English.

Pengcheng He, Xiaodong Liu, Jianfeng Gao, and Weizhu Chen. 2020. DeBERTa: Decoding-enhanced BERT with disentangled attention. arXiv preprint arXiv:2006.03654.

Amikul Kalra and Arkaitz Zubiaga. 2021. Sexism identification in tweets and gabs using deep neural networks. arXiv preprint arXiv:2111.03612.

Hannah Rose Kirk, Wenjie Yin, Bertie Vidgen, and Paul Röttger. 2023. SemEval-2023 Task 10: Explainable Detection of Online Sexism. In Proceedings of the 17th International Workshop on Semantic Evaluation (SemEval-2023). Association for Computational Linguistics.

Mike Lewis, Yinhan Liu, Naman Goyal, Marjan Ghazvininejad, Abdelrahman Mohamed, Omer Levy, Ves Stoyanov, and Luke Zettlemoyer. 2019. BART: Denoising sequence-to-sequence pre-training for natural language generation, translation, and comprehension. arXiv preprint arXiv:1910.13461.

Yinhan Liu, Myle Ott, Naman Goyal, Jingfei Du, Mandar Joshi, Danqi Chen, Omer Levy, Mike Lewis, Luke Zettlemoyer, and Veselin Stoyanov. 2019. RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.

Schütz Mina, Boeck Jaqueline, Liakhovets Daria, Slijepčević Djordje, Kirchknopf Armin, Hecht Manuel, Bogensperger Johannes, Schlarb Sven, Schindler Alexander, and Zeppelzauer Matthias. 2021. Automatic sexism detection with multilingual transformer models. arXiv preprint arXiv:2106.04908.

Julian Risch and Ralf Krestel. 2020. Bagging BERT models for robust aggression identification. In Proceedings of the Second Workshop on Trolling, Aggression and Cyberbullying, pages 55-61.

Francisco Rodríguez-Sánchez, Jorge Carrillo-de Albornoz, Laura Plaza, Julio Gonzalo, Paolo Rosso, Miriam Comet, and Trinidad Donoso. 2021. Overview of EXIST 2021: Sexism identification in social networks. Procesamiento del Lenguaje Natural, 67:195-207.

Victor Sanh, Lysandre Debut, Julien Chaumond, and Thomas Wolf. 2019. DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter. ArXiv, abs/1910.01108.

Sima Sharifirad, Borna Jafarpour, and Stan Matwin. 2018. Boosting text classification performance on sexist tweets by text augmentation and text generation using a combination of knowledge graphs. In Proceedings of the 2nd Workshop on Abusive Language Online (ALW2), pages 107-114.

Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.
10,816,943 | Development of an automatic trend exploration system using the MuST data collection | The automatic extraction of trend information from text documents such as newspaper articles would be useful for exploring and examining trends. To enable this, we used data sets provided by a workshop on multimodal summarization for trend information (the MuST Workshop) to construct an automatic trend exploration system. This system first extracts units, temporals, and item expressions from newspaper articles, then it extracts sets of expressions as trend information, and finally it arranges the sets and displays them in graphs. For example, when documents concerning the politics are given, the system extracts "%" and "Cabinet approval rating" as a unit and an item expression including temporal expressions. It next extracts values related to "%". Finally, it makes a graph where temporal expressions are used for the horizontal axis and the value of percentage is shown on the vertical axis. This graph indicates the trend of Cabinet approval rating and is useful for investigating Cabinet approval rating. Graphs are obviously easy to recognize and useful for understanding information described in documents. In experiments, when we judged the extraction of a correct graph as the top output to be correct, the system accuracy was 0.2500 in evaluation A and 0.3334 in evaluation B. (In evaluation A, a graph where 75% or more of the points were correct was judged to be correct; in evaluation B, a graph where 50% or more of the points were correct was judged to be correct.) When we judged the extraction of a correct graph in the top five outputs to be correct, accuracy rose to 0.4167 in evaluation A and 0.6250 in evaluation B. Our system is convenient and effective because it can output a graph that includes trend information at these levels of accuracy when given only a set of documents as input. | [] | Development of an automatic trend exploration system using the MuST data collection
Association for Computational Linguistics. Copyright Association for Computational Linguistics, July 2006.
Masaki Murata murata@nict.go.jp
National Institute of Information and Communications Technology
3-5 Hikaridai, Seika-cho, Soraku-gun, 619-0289, Kyoto, Japan
Ryukoku University, Otsu
520-2194, Shiga, Japan
Qing Ma
Ryukoku University, Otsu
520-2194, Shiga, Japan
Toshiyuki Kanamaru kanamaru@nict.go.jp
National Institute of Information and Communications Technology
3-5 Hikaridai, Seika-cho, Soraku-gun, 619-0289, Kyoto, Japan
Kyoto University
Yoshida-nihonmatsu-cho, Sakyo-ku, 606-8501, Kyoto, Japan
Hitoshi Isahara isahara@nict.go.jp
National Institute of Information and Communications Technology
3-5 Hikaridai, Seika-cho, Soraku-gun, 619-0289, Kyoto, Japan
Koji Ichii ichiikoji@hiroshima-u.ac.jp
National Institute of Information and Communications Technology
3-5 Hikaridai, Seika-cho, Soraku-gun, 619-0289, Kyoto, Japan
Hiroshima University
1-4-1 Kagamiyama, Higashi-hiroshima, 739-8527, Hiroshima, Japan
Tamotsu Shirado shirado@nict.go.jp
National Institute of Information and Communications Technology
3-5 Hikaridai, Seika-cho, Soraku-gun, 619-0289, Kyoto, Japan
Sachiyo Tsukawaki
National Institute of Information and Communications Technology
3-5 Hikaridai, Seika-cho, Soraku-gun, 619-0289, Kyoto, Japan
Development of an automatic trend exploration system using the MuST data collection
Proceedings of the Workshop on Information Extraction Beyond The Document, Sydney, July 2006. Association for Computational Linguistics.
Abstract
The automatic extraction of trend information from text documents such as newspaper articles would be useful for exploring and examining trends. To enable this, we used data sets provided by a workshop on multimodal summarization for trend information (the MuST Workshop) to construct an automatic trend exploration system. This system first extracts units, temporals, and item expressions from newspaper articles, then it extracts sets of expressions as trend information, and finally it arranges the sets and displays them in graphs. For example, when documents concerning the politics are given, the system extracts "%" and "Cabinet approval rating" as a unit and an item expression including temporal expressions. It next extracts values related to "%". Finally, it makes a graph where temporal expressions are used for the horizontal axis and the value of percentage is shown on the vertical axis. This graph indicates the trend of Cabinet approval rating and is useful for investigating Cabinet approval rating. Graphs are obviously easy to recognize and useful for understanding information described in documents. In experiments, when we judged the extraction of a correct graph as the top output to be correct, the system accuracy was 0.2500 in evaluation A and 0.3334 in evaluation B. (In evaluation A, a graph where 75% or more of the points were correct was judged to be correct; in evaluation B, a graph where 50% or more of the points were correct was judged to be correct.) When we judged the extraction of a correct graph in the top five outputs to be correct, accuracy rose to 0.4167 in evaluation A and 0.6250 in evaluation B. Our system is convenient and effective because it can output a graph that includes trend information at these levels of accuracy when given only a set of documents as input.
Introduction
We have studied ways to automatically extract trend information from text documents, such as newspaper articles, because such a capability will be useful for exploring and examining trends. In this work, we used data sets provided by a workshop on multimodal summarization for trend information (the MuST Workshop) to construct an automatic trend exploration system. This system first extracts units, temporals, and item expressions from newspaper articles, then it extracts sets of expressions as trend information, and finally it arranges the sets and displays them in graphs. For example, when documents concerning politics are given, the system extracts "%" and "Cabinet approval rating" as a unit and an item expression, including temporal expressions. It next extracts values related to "%". Finally, it makes a graph where temporal expressions are used for the horizontal axis and the value of the percentage is shown on the vertical axis. This graph indicates the trend of the Cabinet approval rating and is useful for investigating it. Graphs are obviously easy to recognize and useful for understanding information described in documents.

The MuST Workshop

Kato et al. organized the workshop on multimodal summarization for trend information (the MuST Workshop) (Kato et al., 2005). In this workshop, participants were given data sets consisting of newspaper documents (editions of the Mainichi newspaper from 1998 and 1999 (Japanese documents)) that included trend information for various domains. In the data, tags for important expressions (e.g. temporals, numerical expressions, and item expressions) were tagged manually. The 20 topics of the data sets (e.g., the 1998 home-run race to break the all-time Major League record, the approval rating for the Japanese Cabinet, and news on typhoons) were provided. Trend information was defined as information regarding the change in a value for a certain item. A change in the number of home runs hit by a certain player or a change in the approval rating for the Cabinet are examples of trend information. In the workshop, participants could freely use the data sets for any study they chose to do.
System
Structure of the system
Our automatic trend exploration system consists of the following components.
Component to extract important expressions
First, documents related to a certain topic are given to the system, which then extracts important expressions that will be used to extract and merge trend information. The system extracts item units, temporal units, and item expressions as important expressions.
Here, important expressions are defined as expressions that play important roles in a given document set. Item expressions are defined as expressions that are strongly related to the content of a given document set.
1a. Component to extract important item units
The system extracts item units that will be used to extract and merge trend information. For example, when documents concerning the home-run race are given, "hon" or "gou" (the Japanese item units for the number of home runs), such as in "54 hon" (54th home run), are extracted.

1b. Component to extract important temporal units

The system extracts temporal units that will also be used to extract and merge trend information. For example, the system extracts temporal units such as "nichi" (day), "gatsu" (month), and "nen" (year). In Japanese, temporal units are used to express dates, such as in "2006 nen, 3 gatsu, 27 nichi" for March 27th, 2006.

1c. Component to extract important item expressions

The system extracts item expressions that will also be used to extract and merge trend information. For example, the system extracts expressions that are objects for trend exploration, such as "McGwire" and "Sosa" as item expressions in the case of documents concerning the home-run race.
Component to extract trend information sets
The system identifies the locations in sentences where a temporal unit, an item unit, and an item expression that was extracted by the component to extract important expressions appear in similar sentences and extracts sets of important expressions described by the sentences as a trend information set. The system also extracts numerical values appearing with item units or temporal units, and uses the connection of the numerical values and the item units or temporal units as numerical expressions or temporal expressions.
For example, in the case of documents concerning the home-run race, the system extracts a set consisting of "item expression: McGwire", "temporal expression: 11 day" (the 11th), and "numerical expression: 47 gou" (47th home run) as a trend information set.
3. Component to extract and display important trend information sets
The system gathers the extracted trend information sets and displays them as graphs or by highlighting text displays.
For example, for documents concerning the home-run race, the system displays as graphs the extracted trend information sets for "McGwire" . In these graphs, temporal expressions are used for the horizontal axis and the number of home runs is shown on the vertical axis.
Component to extract important expressions
The system extracts important expressions that will be used to extract trend information sets. Important expressions belong to one of the following categories.
• item units
• temporal units
• item expressions
We use ChaSen (Matsumoto et al., 1999), a Japanese morphological analyzer, to extract expressions. Specifically, we use the parts of speech in the ChaSen outputs to extract the expressions.

The system extracts item units, temporal units, and item expressions by using manually constructed rules over the parts of speech. The system extracts a sequence of nouns adjacent to numerical values as item units. It then extracts, from among the item units, those that include an expression regarding time or date (e.g., "year", "month", "day", "hour", or "second") as temporal units. The system extracts a sequence of nouns as item expressions.
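A simplified sketch of these part-of-speech-based rules is given below. The POS tags are supplied by hand instead of coming from ChaSen, and the rules are reduced to single tokens, so this only illustrates the idea rather than reproducing the actual implementation.

import re

TIME_WORDS = {"nen", "gatsu", "nichi", "ji", "byou"}   # year, month, day, hour, second

def extract_candidates(tagged_tokens):
    # tagged_tokens: list of (surface, part_of_speech) pairs in sentence order.
    item_units, temporal_units, item_expressions = [], [], []
    for i, (surface, pos) in enumerate(tagged_tokens):
        is_number = re.fullmatch(r"[0-9]+", surface) is not None
        follows_number = i > 0 and re.fullmatch(r"[0-9]+", tagged_tokens[i - 1][0])
        if pos == "noun" and follows_number:
            # A noun adjacent to a numerical value is treated as an item unit;
            # item units containing a time/date word become temporal units.
            (temporal_units if surface in TIME_WORDS else item_units).append(surface)
        if pos == "noun" and not is_number:
            item_expressions.append(surface)   # nouns as candidate item expressions
    return item_units, temporal_units, item_expressions

sentence = [("Maguwaia", "noun"), ("ga", "particle"), ("47", "number"),
            ("gou", "noun"), ("wo", "particle"), ("11", "number"), ("nichi", "noun")]
print(extract_candidates(sentence))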
The system next extracts important item units, temporal units, and item expressions that play important roles in the target documents.
The following three methods can be used to extract important expressions. The system uses one of them. The system judges that an expression producing a high value from the following equations is an important expression.
• Equation for the TF numerical term in Okapi (Robertson et al., 1994):

$Score = \sum_{i \in Docs} \frac{TF_i}{TF_i + \frac{l_i}{\Delta}}$  (1)

• Use of total word frequency:

$Score = \sum_{i \in Docs} TF_i$  (2)

• Use of total frequency of documents where a word appears:

$Score = \sum_{i \in Docs} 1$  (3)
In these equations, $i$ is the ID (identification number) of a document, $Docs$ is a set of document IDs, $TF_i$ is the occurrence number of an expression in document $i$, $l_i$ is the length of document $i$, and $\Delta$ is the average length of the documents in $Docs$.
To extract item expressions, we also applied a method that uses the product of the occurrence number of an expression in document $i$ and the length of the expression as $TF_i$, so that we could extract longer expressions.
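The three scoring schemes can be written down directly; the sketch below uses toy document lengths and term frequencies and is only meant to mirror Equations 1-3.

def okapi_tf_score(tf, doc_len, avg_len):
    # Equation (1): sum over documents of TF_i / (TF_i + l_i / Delta).
    return sum(f / (f + doc_len[i] / avg_len) for i, f in tf.items())

def total_frequency_score(tf):
    # Equation (2): total occurrence count over all documents.
    return sum(tf.values())

def document_frequency_score(tf):
    # Equation (3): number of documents in which the expression appears.
    return len(tf)

doc_len = {1: 200, 2: 150, 3: 400}                  # document lengths (toy values)
avg_len = sum(doc_len.values()) / len(doc_len)
tf = {1: 3, 3: 1}      # the expression occurs 3 times in doc 1 and once in doc 3

print(okapi_tf_score(tf, doc_len, avg_len),
      total_frequency_score(tf),
      document_frequency_score(tf))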
Component to extract trend information sets
The system identifies the locations in sentences where a temporal unit, an item unit, and an item expression extracted by the component to extract important expressions appears in similar sentences and extracts sets of important expressions described by the sentences as a trend information set. When more than one trend information set appears in a document, the system extracts the one that appears first. This is because important and new things are often described in the beginning of a document in the case of newspaper articles.
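A minimal sketch of this step is shown below: for each document, the first sentence containing all three chosen expressions is located, and the numbers attached to the item unit and the temporal unit are pulled out with simple patterns. The toy romanized sentences and the regular expressions are illustrative assumptions, not the actual extraction code.

import re

def extract_trend_sets(documents, item_unit, temporal_unit, item_expr):
    sets = []
    for doc in documents:                # doc is a list of sentences
        for sentence in doc:
            if item_expr in sentence and item_unit in sentence and temporal_unit in sentence:
                value = re.search(r"[0-9]+\s*" + re.escape(item_unit), sentence)
                time = re.search(r"[0-9]+\s*" + re.escape(temporal_unit), sentence)
                if value and time:
                    sets.append((item_expr, time.group(0), value.group(0)))
                    break                # keep only the first matching sentence per document
    return sets

docs = [["Maguwaia ga 11 nichi 47 gou wo utta."],
        ["Maguwaia ga 12 nichi 48 gou wo utta."]]
print(extract_trend_sets(docs, "gou", "nichi", "Maguwaia"))
# [('Maguwaia', '11 nichi', '47 gou'), ('Maguwaia', '12 nichi', '48 gou')]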
Component to extract and display important trend information sets
The system gathers the extracted trend information sets and displays them in graphs or as highlighted text. In the graphs, temporal expressions are used for the horizontal axis and numerical expressions are used for the vertical axis. The system also displays sentences used to extract trend information sets and highlights important expressions in the sentences.
The system extracts multiple item units, temporal units, and item expressions (through the component to extract important expressions) and uses these to make all possible combinations of the three kinds of expression. The system extracts trend information sets for each combination and calculates the value of one of the following equations for each combination. The system judges that the combination producing a higher value represents more useful trend information. The following four equations can be used for this purpose, and the system uses one of them.
• Method 1 - Use both the frequency of trend information sets and the scores of important expressions:

$M = Freq \times S_1 \times S_2 \times S_3$  (4)

• Method 2 - Use both the frequency of trend information sets and the scores of important expressions:

$M = Freq \times (S_1 \times S_2 \times S_3)^{\frac{1}{3}}$  (5)

• Method 3 - Use the frequency of trend information sets:

$M = Freq$  (6)

• Method 4 - Use the scores of important expressions:

$M = S_1 \times S_2 \times S_3$  (7)
In these equations, $Freq$ is the number of trend information sets extracted as described in Section 3.3, and $S_1$, $S_2$, and $S_3$ are the values of $Score$ as calculated by the corresponding equation in Section 3.2.
The system extracts the top five item units, the top five item expressions, and the top three temporal units through the component to extract important expressions and forms all possible combinations of these (75 combinations). The system then calculates the value of the above equations for these 75 combinations and judges that a combination having a larger value represents more useful trend information.
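The ranking of the 75 combinations can be sketched as follows, here with Method 1 (Equation 4); the candidate expressions, scores, and the frequency lookup are toy values.

from itertools import product

def rank_combinations(item_units, temporal_units, item_exprs, freq_of_sets):
    ranked = []
    for (u, su), (t, st), (e, se) in product(item_units, temporal_units, item_exprs):
        m = freq_of_sets((u, t, e)) * su * st * se      # Method 1, Equation (4)
        ranked.append((m, u, t, e))
    return sorted(ranked, reverse=True)

item_units = [("gou", 9.2), ("hon", 7.5)]               # (expression, Score)
temporal_units = [("nichi", 8.8), ("gatsu", 4.1)]
item_exprs = [("Maguwaia", 6.3), ("Sosa", 5.9)]
freq = lambda combo: {("gou", "nichi", "Maguwaia"): 30}.get(combo, 5)

print(rank_combinations(item_units, temporal_units, item_exprs, freq)[0])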
Experiments and Discussion
We describe some examples of the output of our system in Sections 4.1, 4.2, and 4.3, and the results from our system evaluation in Section 4.4. We performed the experiments using Japanese newspaper articles.
Extracting important expressions
To extract important expressions we applied the equation for the TF numerical term in Okapi and the method using the product of the occurrence number for an expression and the length of the expression as T F i for item expressions. We did experiments using the three document sets for typhoons, the Major Leagues, and political trends. The results are shown in Table 1.
We found that appropriate important expressions were extracted for each domain. For example, in the data set for typhoons, "typhoon" was extracted as an important item expression and an item unit "gou" (No.), indicating the ID number of each typhoon, was extracted as an important item unit. In the data set for the Major Leagues, the MuST data included documents describing the home-run race between Mark McGwire and Sammy Sosa in 1998. "McGwire" and "Sosa" were properly extracted among the higher ranks. "gou" (No.) and "hon" (home run(s)), important item units for the home-run race, were properly extracted. In the data set for political trends, "naikaku shiji ritsu" (cabinet approval rating) was properly extracted as an item expression and "%" was extracted as an item unit.
Graphs representing trend information
We next tested how well our system graphed the trend information obtained from the MuST data sets. We used the same three document sets as in the previous section. As important expressions in the experiments, we used the item unit, the temporal unit, and the item expression with the highest scores (the top ranked ones) which were extracted by the component to extract important expressions using the method described in the previous section. The system made the graphs using the component to extract trend information sets and the component to extract and display important trend information sets. The graphs thus produced are shown in Figs. 1, 2, and 3. (We used Excel to draw these graphs.) Here, we made a temporal axis for each temporal expression. However, we can also

For the typhoon data set, gou (No.), nichi (day), and taihuu (typhoon) were respectively extracted as the top ranked item unit, temporal unit, and item expression. The system extracted trend information sets using these, and then made a graph where the temporal expression (day) was used for the horizontal axis and the ID numbers of the typhoons were shown on the vertical axis. The MuST data included data for September and October of 1998 and 1999. Figure 1 is useful for seeing when each typhoon hit Japan during the typhoon season each year. Comparing the 1998 data with that of 1999 reveals that the number of typhoons increased in 1999.
For the Major Leagues data set, gou (No.), nichi (day), and Maguwaia (McGwire) were extracted with the top rank. The system used these to make a graph where the temporal expression (day) was used for the horizontal axis and the cumulative number of home runs hit by McGwire was shown on the vertical axis (Fig. 2). The MuST data included data beginning in August, 1998. The graph shows some points where the cumulative number of home runs decreased (e.g., September 4th), which was obviously incorrect. This was because our system wrongly extracted the number of home runs hit by Sosa when this was given close to McGwire's total.

Figure 3: Trend graph for the political trends data set
In the political trends data set, %, gatsu (month), and naikaku shiji ritsu (cabinet approval rating) were extracted with the top rankings. The system used these to make a graph where the temporal expression (month) was used for the horizontal axis and the Cabinet approval rating (Japanese Cabinet) was shown as a percentage on the vertical axis. The MuST data covered 1998 and 1999. Figure 3 shows the cabinet approval rating of the Obuchi Cabinet. We found that the overall approval rating trend was upwards. Again, there were some errors in the extracted trend information sets. For example, although June was handled correctly, the system wrongly extracted May as a temporal expression from the sentence "in comparison to the previous investigation in May".
Sentence extraction and highlighting display
We then tested the sentence extraction and highlighting display with respect to trend information using the MuST data set; in this case, we used the typhoon data set. As important expressions, we used the item unit, the temporal unit, and the item expression extracted with the highest scores (the top ranked ones) by the component to extract important expressions using the method described in the previous section. Gou (No.), nichi (day), and taihuu (typhoon) were respectively extracted as an item unit, a temporal unit, and an item expression. The system extracted sentences including the three expressions and highlighted these expressions in the sentences. The results are shown in Figure 4. The first trend information sets to ap- Sept. 24, 1999 No. 18 Medium-scale and strong Typhoon No. 18 made landfall in the north of Kumamoto Prefecture around 6:00 a.m. on the 24th, and after moving to Suo-Nada made another landfall at Ube City in Yamaguchi Prefecture before 9:00 p.m., tracked through the Chugoku district, and then moved into the Japan Sea after 10:00 p.m. Sept. 25, 1999 No. 18 Typhoon No. 18, which caused significant damage in the Kyushu and Chugoku districts, weakened and made another landfall before moving into the Sea of Okhotsk around 10:00 a.m. on the 25th. Figure 4: Sentence extraction and highlighting display for the typhoon data set pear are underlined twice and the other sets are underlined once. (In the actual system, color is used to make this distinction.) The extracted temporal expressions and numerical expressions are presented in the upper part of the extracted sentence. The graphs shown in the previous section were made by using these temporal expressions and numerical expressions. The extracted sentences plainly described the state of affairs regarding the typhoons and were important sentences. For the research being done on summarization techniques, this can be considered a useful means of extracting important sentences. The extracted sentences typically describe the places affected by each typhoon and whether there was any damage. They contain important descriptions about each typhoon. This confirmed that a simple method of extracting sentences containing an item unit, a temporal unit, and an item expression can be used to extract important sentences.
The fourth sentence in the figure includes information on both Typhoon No. 7 and Typhoon No. 8. We can see from the expressions that are underlined once that there is a trend information set other than the extracted trend information set (underlined twice). Since the system sometimes extracts incorrect trend information sets, the highlighting is useful for identifying such sets.
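A minimal sketch of this extraction rule is given below (illustrative only, not the actual system); corpus_sentences and the marker format used for highlighting are assumptions.

import re

def extract_and_highlight(sentences, item_unit, temporal_unit, item_expr):
    """Keep only sentences containing all three top-ranked expressions
    and mark each occurrence (the described system uses colour instead)."""
    selected = []
    for sent in sentences:
        if item_unit in sent and temporal_unit in sent and item_expr in sent:
            marked = sent
            for expr in (item_unit, temporal_unit, item_expr):
                marked = re.sub(re.escape(expr), "<<" + expr + ">>", marked)
            selected.append(marked)
    return selected

# e.g. extract_and_highlight(corpus_sentences, "gou", "nichi", "taihuu")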
Evaluation
We used a closed data set and an open data set to evaluate our system. The closed data set was the data set provided by the MuST workshop organizer and contained 20 domain document sets. The data sets were separated for each domain.
We made the open data set based on the MuST data set using newspaper articles (editions of the Mainichi newspaper from 2000 and 2001). We made 24 document sets using information retrieval by term query. We used documents retrieved by term query as the document set of the domain for each query term.
We used the closed data set to adjust our system and used the open data set to calculate the evaluation scores of our system for evaluation.
We judged whether a document set included the information needed to make trend graphs by consulting the 30 combinations of the three kinds of important expressions that had the highest scores, following the method of Section 3.4. There were 19 document sets in the open data that included such information, and we used these 19 document sets for the following evaluation.
In the evaluation, we examined how accurately trend graphs could be output when using the top ranked expressions. The results are shown in Table 2. The best scores are described using bold fonts for each evaluation score.
We used five evaluation scores. MRR is the average of the score where 1/r is given when the rank of the first correct output is r (Murata et al., 2005b). TP1 is the average precision of the first output. TP5 is the average precision where the system is judged correct if it includes a correct output in the first five outputs. RP is the average of the r-precision and AP is the average of the average precision. (Here, each evaluation score is calculated for each domain data set, and the sum of these scores divided by the number of domain data sets is the average.) R-precision is the precision of the top r outputs, where r is the number of correct answers. Average precision is the average of the precision values at the ranks where each correct answer is output (Murata et al., 2000). The r-precision indicates the precision at the point where the recall and the precision have the same value. The precision is the ratio of correct answers among the system outputs. The recall is the ratio of correct answers in the system output to the total number of correct answers.
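For concreteness, the following sketch (not from the original paper) computes MRR, r-precision, and average precision from a ranked list of correctness flags for one domain data set; the per-domain scores would then be averaged as described above. The variable names are illustrative.

def mrr(ranked_correct_flags):
    """1/r for the first correct output, 0 if no output is correct."""
    for r, ok in enumerate(ranked_correct_flags, start=1):
        if ok:
            return 1.0 / r
    return 0.0

def r_precision(ranked_correct_flags, num_correct_answers):
    """Precision of the top r outputs, with r = number of correct answers."""
    top_r = ranked_correct_flags[:num_correct_answers]
    return sum(top_r) / num_correct_answers

def average_precision(ranked_correct_flags, num_correct_answers):
    """Average of the precision values at the ranks of the correct outputs."""
    hits, total = 0, 0.0
    for r, ok in enumerate(ranked_correct_flags, start=1):
        if ok:
            hits += 1
            total += hits / r
    return total / num_correct_answers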
Methods 1 to 4 in Table 2 are the methods used to extract useful trend information described in Section 3.4. Use of the expression length means the product of the occurrence number for an expression and the length of the expression was used to calculate the score for an important item expression. No use of the expression length means this product was not used and only the occurrence number was used.
To calculate the r-precision and average precision, we needed correct answer sets. We made the correct answer sets by manually examining the top 30 outputs for the 24 (= 4 × 6) methods (the combinations of methods 1 to 4 and the use of Equations 1 to 3 with or without the expression length) and defining the useful trend information among them as the correct answer sets.
In evaluation A, a graph where 75% or more of the points were correct was judged to be correct. In evaluation B, a graph where 50% or more of the points were correct was judged to be correct. From the experimental results, we found that the method using the total frequency for a word (Equation 2) and the length of an expression was best for calculating the scores of important expressions.
Using the length of an expression was important. (The way of using the length of an expression was described in the last part of Section 3.2.) For example, when "Cabinet approval rating" appears in documents, a method without expression lengths extracts "rating". When the system extracts trend information sets using "rating", it extracts wrong information related to types of "rating" other than "Cabinet approval rating". This hinders the extraction of coherent trend information. Thus, it is beneficial to use the length of an expression when extracting important item expressions.
We also found that method 1 (using both the frequency of the trend information sets and the scores of important expressions) was generally the best.
When we judged the extraction of a correct graph as the top output in the experiments to be correct, our best system accuracy was 0.3158 in evaluation A and 0.4211 in evaluation B. When we judged the extraction of a correct graph in the top five outputs to be correct, the best accuracy rose to 0.5263 in evaluation A and 0.7895 in evaluation B. In terms of the evaluation scores for the 24 original data sets (these evaluation scores were multiplied by 19/24), the corresponding accuracies were 0.2500 in evaluation A and 0.3334 in evaluation B for the top output, and 0.4167 in evaluation A and 0.6250 in evaluation B for the top five outputs. Our system is convenient and effective because it can output a graph that includes trend information at these levels of accuracy when given only a set of documents as input.
As shown in Table 2, the best values for RP (which indicates the precision where the recall and the precision have the same value) and AP were 0.2127 and 0.1705, respectively, in evaluation B.
This RP value indicates that our system could extract about one out of five graphs among the correct answers when the recall and the precision had the same value.
Related studies
Fujihata et al. (Fujihata et al., 2001) developed a system to extract numerical expressions and their related item expressions by using syntactic information and patterns. However, they did not deal with the extraction of important expressions or gather trend information sets. In addition, they did not make a graph from the extracted expressions.
Nanba et al. (Nanba et al., 2005) took an approach of judging whether the sentence relationship indicates transition (trend information) or renovation (revision of information) and used the judgment results to extract trend information. They also constructed a system to extract numerical information from input numerical units and make a graph that includes trend information. However, they did not consider ways to extract item numerical units and item expressions automatically.
In contrast to these systems, our system automatically extracts item numerical units and item expressions that each play an important role in a given document set. When a document set for a certain domain is given, our system automatically extracts item numerical units and item expressions, then extracts numerical expressions related to these, and finally makes a graph based on the extracted numerical expressions. In other words, when a document set is given, the system automatically makes a graph that includes trend information. Our system also uses an original method of producing more than one graph and selecting an appropriate one among them using Methods 1 to 4, which Fujihata et al. and Nanba et al. did not use.
Conclusion
We have studied the automatic extraction of trend information from text documents such as newspaper articles. Such extraction will be useful for exploring and examining trends. We used data sets provided by a workshop on multimodal summarization for trend information (the MuST Workshop) to construct our automatic trend exploration system. This system first extracts units, temporals, and item expressions from newspaper articles, then it extracts sets of expressions as trend information, and finally it arranges the sets and displays them in graphs.
In our experiments, when we judged the extraction of a correct graph as the top output to be correct, the system accuracy was 0.2500 in evaluation A and 0.3334 in evaluation B. (In evaluation A, a graph where 75% or more of the points were correct was judged to be correct; in evaluation B, a graph where 50% or more of the points were correct was judged to be correct.) When we judged the extraction of a correct graph in the top five outputs to be correct, we obtained accuracy of 0.4167 in evaluation A and 0.6250 in evaluation B. Our system is convenient and effective because it can output a graph that includes trend information at these levels of accuracy when only a set of documents is provided as input.
In the future, we plan to continue this line of study and improve our system. We also hope to apply the method of using term frequency in documents to extract trend information as reported by Murata et al. (Murata et al., 2005a).
Figure 1: Trend graph for the typhoon data set
Figure 2: Trend graph for the Major Leagues data set
display a graph where regular temporal intervals are used in the temporal axis.

Table 1: Examples of extracting important expressions

Typhoon
item units | temporal units | item expressions
gou (No.) | nichi (day) | taihuu (typhoon)
me-toru (meter(s)) | ji (o'clock) | gogo (afternoon)
nin (people) | jigoro (around x o'clock) | higai (damage)
kiro (kilometer(s)) | fun (minute(s)) | shashin setsumei (photo caption)
miri (millimeter(s)) | jisoku (per hour) | chuushin (center)

Major League
item units | temporal units | item expressions
gou (No.) | nichi (day) | Maguwaia (McGwire)
hon (home run(s)) | nen (year) | honruida (home run)
kai (inning(s)) | gatsu (month) | Ka-jinarusu (Cardinals)
honruida (home run(s)) | nen buri (after x year(s) interval) | Ma-ku Maguwaia ichiruishu (Mark McGwire, the first baseman)
shiai (game(s)) | fun (minute(s)) | So-sa (Sosa)

Political Trend
item units | temporal units | item expressions
% (%) | gatsu (month) | naikaku shiji ritsu (cabinet approval rating)
pointo gen (decrease of x point(s)) | nichi (day) | Obuchi naikaku (Obuchi Cabinet)
pointo zou (increase of x point(s)) | nen (year) | Obuchi shushou (Prime Minister Obuchi)
dai (generation) | kagetu (month(s)) | shijiritsu (approval rating)
pointo (point(s)) | bun no (divided) | kitai (expectation)

Further extracted sentences shown in Figure 4 (sentence extraction and highlighting display for the typhoon data set):
Oct. 17, 1998 No. 10: Medium-scale and medium-strength Typhoon No. 10 made landfall on Makurazaki City in Kagoshima Prefecture around 4:30 p.m. on the 17th, and then moved across the West Japan area after making another landfall near Sukumo City in Kochi Prefecture in the evening.
Sept. 15, 1999 No. 16: Small-scale and weak Typhoon No. 16 became extratropical in Nagano Prefecture and moved out to sea off Ibaraki Prefecture on the 15th.
Sept. 16, 1998 No. 5: Large-scale and medium-strength Typhoon No. 5 made landfall near Omaezaki in Shizuoka Prefecture before dawn on the 16th, and then moved to the northeast involving the Koshin, Kantou, and Touhoku areas in the storm.
Sept. 21, 1998 No. 8: Small-scale Typhoon No. 8 made landfall near Tanabe City in Wakayama Prefecture around 4:00 p.m. on the 21st, and weakened while tracking northward across the Kinki district.
Sept. 22, 1998 No. 7: Typhoon No. 7 made landfall near Wakayama City in the afternoon on the 22nd, and will hit the Kinki district.
Sept. 21, 1998 No. 8: The two-day consecutive landfall of Typhoon No. 8 on the 21st and Typhoon No. 7 on the 22nd caused nine deaths and many injuries in a total of six prefectures including Nara, Fukui, Shiga, and so on.
Aug. 20, 1999 No. 11: The Meteorological Office announced on the 20th that Typhoon No. 11 developed 120 kilometers off the south-southwest coast of Midway.
Sept. 14, 1999 No. 16: Typhoon No. 16, which developed off the south coast in Miyazaki Prefecture, made landfall near Kushima City in the prefecture around 5:00 p.m. on the 14th.

Table 2: Experimental results for the open data
         | Evaluation A: MRR TP1 TP5 RP AP | Evaluation B: MRR TP1 TP5 RP AP
Use of Equation 1 and the expression length
Method 1 | 0.3904 0.3158 0.4737 0.1422 0.1154 | 0.5746 0.4211 0.7368 0.2127 0.1674
Method 2 | 0.3877 0.3158 0.4737 0.1422 0.1196 | 0.5544 0.4211 0.7368 0.2127 0.1723
Method 3 | 0.3895 0.3158 0.5263 0.1422 0.1202 | 0.5491 0.4211 0.7895 0.2127
We do not use manually provided tags for important expressions because our system automatically extracts important expressions.
Katsuyuki Fujihata, Masahiro Shiga, and Tatsunori Mori. 2001. Extracting of numerical expressions by constraints and default rules of dependency structure. Information Processing Society of Japan, WGNL 145.
Tsuneaki Kato, Mitsunori Matsushita, and Noriko Kando. 2005. MuST: A workshop on multimodal summarization for trend information. Proceedings of the Fifth NTCIR Workshop Meeting on Evaluation of Information Access Technologies: Information Retrieval, Question Answering and Cross-Lingual Information Access.
Yuji Matsumoto, Akira Kitauchi, Tatsuo Yamashita, Yoshitaka Hirano, Hiroshi Matsuda, and Masayuki Asahara. 1999. Japanese morphological analysis system ChaSen version 2.0 manual, 2nd edition.
Masaki Murata, Kiyotaka Uchimoto, Hiromi Ozaku, Qing Ma, Masao Utiyama, and Hitoshi Isahara. 2000. Japanese probabilistic information retrieval using location and category information. The Fifth International Workshop on Information Retrieval with Asian Languages, pages 81-88.
Masaki Murata, Koji Ichii, Qing Ma, Tamotsu Shirado, Toshiyuki Kanamaru, and Hitoshi Isahara. 2005a. Trend survey on Japanese natural language processing studies over the last decade. In The Second International Joint Conference on Natural Language Processing, Companion Volume to the Proceedings of Conference including Posters/Demos and Tutorial Abstracts.
Masaki Murata, Masao Utiyama, and Hitoshi Isahara. 2005b. Use of multiple documents as evidence with decreased adding in a Japanese question-answering system. Journal of Natural Language Processing, 12(2).
Hidetsugu Nanba, Yoshinobu Kunimasa, Shiho Fukushima, Teruaki Aizawa, and Manabu Okumura. 2005. Extraction and visualization of trend information based on the cross-document structure. Information Processing Society of Japan, WGNL 168, pages 67-74.
S. E. Robertson, S. Walker, S. Jones, M. M. Hancock-Beaulieu, and M. Gatford. 1994. Okapi at TREC-3. In TREC-3.
245,855,866 | The LMU Munich Systems for the WMT21 Unsupervised and Very Low-Resource Translation Task | We present our submissions to the WMT21 shared task in Unsupervised and Very Low-Resource machine translation between German and Upper Sorbian, German and Lower Sorbian, and Russian and Chuvash. Our low-resource systems (German↔Upper Sorbian, Russian↔Chuvash) are pre-trained on high-resource pairs of related languages. We fine-tune those systems using the available authentic parallel data and improve by iterated back-translation. The unsupervised German↔Lower Sorbian system is initialized by the best Upper Sorbian system and improved by iterated back-translation using monolingual data only. | [
207880568,
52967399,
201741133,
15349458,
3515219,
189928248,
13751870
] | The LMU Munich Systems for the WMT21 Unsupervised and Very Low-Resource Translation Task
November 10-11, 2021
Jindřich Libovický libovicky@cis.lmu.de
Center for Information and Language Processing LMU Munich
Alexander Fraser fraser@cis.lmu.de
Center for Information and Language Processing LMU Munich
The LMU Munich Systems for the WMT21 Unsupervised and Very Low-Resource Translation Task
Proceedings of the Sixth Conference on Machine Translation (WMT)
the Sixth Conference on Machine Translation (WMT)November 10-11, 2021
We present our submissions to the WMT21 shared task in Unsupervised and Very Low-Resource machine translation between German and Upper Sorbian, German and Lower Sorbian, and Russian and Chuvash. Our low-resource systems (German↔Upper Sorbian, Russian↔Chuvash) are pre-trained on high-resource pairs of related languages. We fine-tune those systems using the available authentic parallel data and improve by iterated back-translation. The unsupervised German↔Lower Sorbian system is initialized by the best Upper Sorbian system and improved by iterated back-translation using monolingual data only.
Introduction
In this paper, we describe systems for translation between German (de) and Upper Sorbian (hsb), German (de) and Lower Sorbian (dsb), and Russian (ru) and Chuvash (cv) developed at LMU Munich for the WMT21 shared task on unsupervised and very low resource machine translation (MT).
Upper Sorbian is a minority language spoken by around 30,000 people in today's German federal state of Saxony; Lower Sorbian has around 7,000 speakers and is spoken in the German federal state of Brandenburg. With such a small number of speakers, machine translation and automatic processing of the Sorbian languages is an inherently low-resource problem, without any chance that the resources available for Sorbian would ever approach the size of resources for languages spoken by millions of people. On the other hand, because the Sorbian languages are Western Slavic languages related to Czech and Polish, it is possible to take advantage of the relatively rich resources collected for these two languages.
Unlike our last year's submission for Upper Sorbian (Libovický et al., 2020), we decided not to use synthetic data from unsupervised translation between Czech and Upper Sorbian and only did iterative back-translation. Despite having more authentic parallel data than last year, our system reaches approximately the same translation quality. Our Upper Sorbian systems ranked third out of six systems in the official ranking.
We leverage the relatedness between the Sorbian languages and use the Upper Sorbian system as a starting point for iterative back-translation using monolingual data only. Our Lower Sorbian Systems ranked second (de→dsb) and third (dbs→de) out of four teams in the official ranking.
Chuvash is a minority language spoken in the Volga region in the southwest of Russia. Although it uses the Cyrillic script, it is not related to eastern Slavic languages, but it is a Turkic language, relatively isolated in the Turkic language family. As a language with the highest number of speakers in this shared task, it also has the highest amount of available parallel data. We adopt a similar approach as for German-Upper Sorbian translation and pretrain our models on the related Kazakh language. In addition, we experiment with character-level models in the hope that they will be particularly effective for agglutinative morphology.
Experimental Setup
Most of our experimental setup is shared across all the language pairs. All our models use the Transformer architecture (Vaswani et al., 2017) as implemented in FairSeq (Ott et al., 2019).
All data is segmented using BPE (Sennrich et al., 2016b) with 16k merge operations as implemented in YouTokenToMe 1 without previous explicit tokenization. The merges are computed using a concatenation of all training data: German, Czech, Upper and Lower Sorbian in the first set of experiments, Russian, Kazakh, and Chuvash in the second set of experiments.
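A minimal sketch of this segmentation step with YouTokenToMe is shown below; the file names are placeholders, and vocab_size=16000 is used as an approximation of the reported 16k merge operations, since the library is configured by vocabulary size rather than by merges.

import youtokentome as yttm

# "all_training_text.txt" is assumed to be the concatenation of the German,
# Czech, Upper and Lower Sorbian training data (file name is illustrative).
yttm.BPE.train(data="all_training_text.txt",
               vocab_size=16000,
               model="bpe16k.model")

bpe = yttm.BPE(model="bpe16k.model")
pieces = bpe.encode(["Witajce k nam!"], output_type=yttm.OutputType.SUBWORD)
print(pieces)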
For the supervised task, we first pre-train models on high-resource pairs of related languages. We upsample the authentic parallel data to match the size of the synthetic data. We keep most default hyperparameters from the predefined architectures in FairSeq (transformer for the Base model, transformer_wmt_en_de_big_t2t for the Big model). The batch size is 6k tokens for the Base models and 2k tokens for the Big models on a single GPU. Because we always start with high-resource training, we keep the dropout at the standard value of 0.1.
We use these models to initialize the weights (Nguyen and Chiang, 2017; Kocmi and Bojar, 2018) of the supervised low-resource models without restarting the optimizer. Because the learning rate is already low at that stage of training, we do not need to change the dropout to prevent overfitting. First, we train the supervised models using the authentic parallel data only, then we continue with iterated back-translation. The best Upper Sorbian-to-German model is used to translate Lower Sorbian monolingual data into German. In the next steps, we continue with a standard iterative back-translation procedure for unsupervised neural machine translation (Artetxe et al., 2018;Lample et al., 2018).
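The control flow of this iterated back-translation can be summarized schematically. The sketch below is not the authors' pipeline: train and translate are stand-in callables for the fairseq training and inference steps, and the 1:1 mixing/upsampling of authentic and synthetic data is folded into simple list concatenation.

from typing import Callable, List, Tuple

def iterated_back_translation(
    train: Callable,          # (init_checkpoint, parallel_pairs) -> model
    translate: Callable,      # (model, monolingual_sentences) -> synthetic pairs
    parallel: List[Tuple[str, str]],   # authentic (source, target) pairs
    mono_src: List[str],
    mono_tgt: List[str],
    init_fwd: str,            # pre-trained checkpoint for source -> target
    init_bwd: str,            # pre-trained checkpoint for target -> source
    iterations: int = 4,
):
    # Train both directions on authentic data, initialized from the
    # high-resource related-language checkpoints.
    fwd = train(init_fwd, parallel)
    bwd = train(init_bwd, [(t, s) for s, t in parallel])
    for _ in range(iterations):
        # Back-translate target monolingual data and retrain the forward model.
        synth_fwd = translate(bwd, mono_tgt)
        fwd = train(init_fwd, parallel + synth_fwd)
        # Back-translate source monolingual data and retrain the backward model.
        synth_bwd = translate(fwd, mono_src)
        bwd = train(init_bwd, [(t, s) for s, t in parallel] + synth_bwd)
    return fwd, bwd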
Our final submission is an ensemble (with the vote strategy) of the best-scoring systems in the process of iterated back-translation. Language-pairspecific descriptions and results are discussed in the following sections.
We evaluate our systems using the BLEU score (Papineni et al., 2002) and the chrF score (Popović, 2015) as implemented in SacreBLEU (Post, 2018). 3 Further, we evaluate the models using BERTScore (Zhang et al., 2020) 4 with XLM-RoBERTa Large (Conneau et al., 2020) as the underlying model for German and Russian and mBERT (Devlin et al., 2019) for Chuvash. Similar to the official task evaluation, we also report for each system how many other systems it significantly outperforms in each metric, using bootstrap resampling (Koehn, 2004) with 1k samples at the 0.95 significance level; for each metric, a system receives one point for each system it significantly outperforms.
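As an illustration of this evaluation setup (not the exact scripts used), the following sketch computes corpus BLEU and chrF with sacrebleu and a simplified bootstrap comparison between two systems; BERTScore and the exact significance-clustering procedure are omitted, and the variable names are placeholders.

import random
import sacrebleu

def corpus_scores(hyps, refs):
    """BLEU and chrF for one system over the whole test set."""
    bleu = sacrebleu.corpus_bleu(hyps, [refs]).score
    chrf = sacrebleu.corpus_chrf(hyps, [refs]).score
    return bleu, chrf

def bootstrap_wins(hyps_a, hyps_b, refs, samples=1000, seed=0):
    """Fraction of resamples in which system A beats system B on BLEU."""
    rng = random.Random(seed)
    n, wins = len(refs), 0
    for _ in range(samples):
        idx = [rng.randrange(n) for _ in range(n)]
        a = sacrebleu.corpus_bleu([hyps_a[i] for i in idx],
                                  [[refs[i] for i in idx]]).score
        b = sacrebleu.corpus_bleu([hyps_b[i] for i in idx],
                                  [[refs[i] for i in idx]]).score
        wins += a > b
    return wins / samples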
German ↔ Upper Sorbian
Pre-training. For training the German↔Czech systems, we followed the same setup as in our last year's submission (Libovický et al., 2020). We used all parallel datasets from the Opus project (Tiedemann, 2012), which amounted to 15.4M sentence pairs after filtering by length and language identity. We trained a Transformer Base model on this data and used this model to generate back-translations. We used 20M Czech and 20M German sentences from the WMT News Crawl. We mix the back-translated and authentic parallel data one-to-one and train Transformer Big models on it.
Sorbian data. We used all Upper Sorbian data provided for the shared task, i.e., 148k parallel sentence pairs (88k sentence pairs more than last year); we did not apply any filtering to the parallel dataset. The development validation set and the development test set of 2k sentences were the same as last year.
Back-translation. We used 15M German sentences from the WMT News Crawl and all available monolingual Upper Sorbian data, 696k sentences, for back-translation. We applied the same rule-based statistical fixing of hyphenation-related OCR errors as the last year (Libovický et al., 2020, § 3.1). To better leverage the limited amount of monolingual data, we sample the Upper Sorbian translations 5×. We iterated the back-translation 4 times, always initializing the model with the Czech-German models (see Figure 1).
Results.
The results are presented in Table 1. In the translation direction into German, the translation quality gradually increased between the back-translation steps. In the opposite direction, the translation quality oscillated. We attribute this to the larger amount of authentic German sentences. Ensembling only has a negligible effect. Note also that for translation into Sorbian, no differences between the models are statistically significant. In the opposite direction, the BLEU and the chrF score only separate the systems into two clusters, whereas the differences among BERTScores are always significant in the bootstrap testing, even though the absolute score differences are smaller. The best system for translation into German is a single model from the last iteration of back-translation, despite scoring slightly worse in the BLEU score.
German ↔ Lower Sorbian
Data. Because this is a purely unsupervised task, we did not use any Lower Sorbian parallel data. We used the same German monolingual data as we used for back-translation for Upper Sorbian. We use all the Lower Sorbian monolingual data, 145k sentences, provided by the organizers.
Iterative back-translation. Similarly to Upper Sorbian, we sample the back-translation of Lower Sorbian 10× for higher diversity in the training data.
Results. The final results are tabulated in Table 2. Figure 2 shows the translation quality in terms of chrF score during back-translation iterations. Similar to Upper Sorbian, the direction into German, which uses larger monolingual data, tends to improve more smoothly than the opposite direction. Also, the ensembling of the three best-scoring systems only has a negligible effect: the best single systems and the ensemble do not significantly differ in any of the metrics.
Russian ↔ Chuvash
Pre-training. Similar to the Upper Sorbian systems, we pre-train the systems on a high-resource related language pair, Kazakh-Russian. We used the crawled Kazakh-Russian corpus of 5M sentence pairs published for WMT19 (Barrault et al., 2019) to train a Transformer Base model. We used these models to back-translate 3M Kazakh and 3M Russian sentences from the WMT News Crawl from the most recent years.
Chuvash data. We used all parallel data provided by the organizers, 717k sentence pairs, without any filtering. For back-translation, we used all 2.8M monolingual Chuvash sentences provided for the competition. For Russian, we used 18M monolingual sentences from the WMT News Crawl.
Back-translation. We ran two iterations of back-translation. We sample from the model during back-translation. We sampled 4 different translations for each Chuvash sentence to increase the training data diversity. We mix the authentic and synthetic parallel training data in a one-to-one ratio. All models are initialized by the Russian↔Kazakh models.
Character models. We further experiment with finetuning the system to the character level. Libovický and Fraser (2020) managed to train a character-level system for English-to-Turkish translation, i.e., into another Turkic language. Here, we test whether this is a property of Turkic languages or an artifact of the English-Turkish dataset. We follow Libovický and Fraser (2020) and finetune the subword model to the character level.
Results. The results are presented in Table 3. Compared to other language pairs, back-translation had a surprisingly small effect on the translation quality. We suspect this result might be due to errors in data processing or might signal a need for a better data filtering technique. Model ensembling has no effect here. The character-level systems are on average 2 BLEU points worse than their subword counterparts, which is consistent with the results of character-level models on high-resource languages (Libovický and Fraser, 2020). Surprisingly, the character-level models seem to have much larger gains from model ensembling than the subword-based models. In fact, the ensemble of the character-level models is statistically indistinguishable from the best subword-based models.
Conclusions
We presented our systems for low-resourced translation between German and Upper Sorbian, unsu-pervised translation between German and Lower Sorbian, and translation between Chuvash and Russian.
Our systems used standard state-of-the-art techniques for low-resource and unsupervised machine translation but did not exhaust all available methods. Better results could be achieved using more monolingual data and by more careful filtering of the synthetic parallel data.
Figure 3: A diagram of the training procedure of the Russian↔Chuvash systems. Gray dashed arrows denote model initialization; solid black arrows denote synthetic data generation by back-translation.
Table 1: Quantitative results of the German↔Upper Sorbian translation systems on the development test data.
Table 2: Automatic scores for the best German↔Lower Sorbian systems.
Figure 2: chrF scores during iterative back-translation for unsupervised German↔Lower Sorbian translation (dsb → de and de → dsb). The orange vertical lines denote 95%-confidence intervals using bootstrap resampling.
Table 3: Quantitative results of the Russian↔Chuvash translation systems on the development test data.
Sergey Edunov, Myle Ott, Michael Auli, and David Grangier. 2018. Understanding back-translation at scale. In Proceedings of the 2018 Conference on Empirical Methods in Natural Language Processing, pages 489-500, Brussels, Belgium. Association for Computational Linguistics.
Tom Kocmi and Ondřej Bojar. 2018. Trivial transfer learning for low-resource neural machine translation. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 244-252, Brussels, Belgium. Association for Computational Linguistics.
Philipp Koehn. 2004. Statistical significance tests for machine translation evaluation. In Proceedings of the 2004 Conference on Empirical Methods in Natural Language Processing, pages 388-395, Barcelona, Spain. Association for Computational Linguistics.
Guillaume Lample, Alexis Conneau, Ludovic Denoyer, and Marc'Aurelio Ranzato. 2018. Unsupervised machine translation using monolingual corpora only. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.
Jindřich Libovický and Alexander Fraser. 2020. Towards reasonably-sized character-level transformer NMT by finetuning subword systems. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 2572-2579, Online. Association for Computational Linguistics.
https://github.com/VKCOM/YouTokenToMe
We re-used the published code https://github.com/pytorch/fairseq/tree/master/examples/backtranslation.
BLEU score signature nrefs:1|case:mixed|eff:no|tok:13a|smooth:exp|version:2.0.0; chrF score signature nrefs:1|case:mixed|eff:yes|nc:6|nw:0|space:no|version:2.0.0
4 https://github.com/Tiiiger/bert_score
Acknowledgments
This work was also supported by the European Research Council under the European Union's Horizon 2020 research and innovation program (grant agreement #640550) and by the DFG (grant FR 2829/4-1).
A Training hyper-parameters
We use the following command line options for fairseq-train command in all experiments. For the Transformer Base models, we use the pre-defined transformer architecture; for Transformer Big, we use transformer_wmt_en_de_big_t2t. The batch size is 6000 tokens for the Base models and 2000 tokens for the Big models.
fairseq-train \
  $DATA \
  --arch $ARCHITECTURE \
  --share-all-embeddings \
  --label-smoothing 0.1 \
  --criterion label_smoothed_cross_entropy \
  --optimizer adam \
  --adam-betas '(0.9, 0.998)' \
  --clip-norm 5.0 \
  --lr 5e-4 \
  --lr-scheduler inverse_sqrt \
  --warmup-updates 16000 \
  --max-tokens $TOKENS
Mikel Artetxe, Gorka Labaka, Eneko Agirre, and Kyunghyun Cho. 2018. Unsupervised neural machine translation. In 6th International Conference on Learning Representations, ICLR 2018, Vancouver, BC, Canada, April 30 - May 3, 2018, Conference Track Proceedings. OpenReview.net.
Loïc Barrault, Ondřej Bojar, Marta R. Costa-jussà, Christian Federmann, Mark Fishel, Yvette Graham, Barry Haddow, Matthias Huck, Philipp Koehn, Shervin Malmasi, Christof Monz, Mathias Müller, Santanu Pal, Matt Post, and Marcos Zampieri. 2019. Findings of the 2019 conference on machine translation (WMT19). In Proceedings of the Fourth Conference on Machine Translation (Volume 2: Shared Task Papers, Day 1), pages 1-61, Florence, Italy. Association for Computational Linguistics.
Isaac Caswell, Ciprian Chelba, and David Grangier. 2019. Tagged back-translation. In Proceedings of the Fourth Conference on Machine Translation (Volume 1: Research Papers), pages 53-63, Florence, Italy. Association for Computational Linguistics.
Alexis Conneau, Kartikay Khandelwal, Naman Goyal, Vishrav Chaudhary, Guillaume Wenzek, Francisco Guzmán, Edouard Grave, Myle Ott, Luke Zettlemoyer, and Veselin Stoyanov. 2020. Unsupervised cross-lingual representation learning at scale. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 8440-8451, Online. Association for Computational Linguistics.
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: Pre-training of deep bidirectional transformers for language understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long and Short Papers), pages 4171-4186, Minneapolis, Minnesota. Association for Computational Linguistics.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. Bleu: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318, Philadelphia, Pennsylvania, USA. Association for Computational Linguistics.
Maja Popović. 2015. chrF: character n-gram F-score for automatic MT evaluation. In Proceedings of the Tenth Workshop on Statistical Machine Translation, pages 392-395, Lisbon, Portugal. Association for Computational Linguistics.
Matt Post. 2018. A call for clarity in reporting BLEU scores. In Proceedings of the Third Conference on Machine Translation: Research Papers, pages 186-191, Brussels, Belgium. Association for Computational Linguistics.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016a. Improving neural machine translation models with monolingual data. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 86-96, Berlin, Germany. Association for Computational Linguistics.
Rico Sennrich, Barry Haddow, and Alexandra Birch. 2016b. Neural machine translation of rare words with subword units. In Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1715-1725, Berlin, Germany. Association for Computational Linguistics.
Jörg Tiedemann. 2012. Parallel data, tools and interfaces in OPUS. In Proceedings of the Eighth International Conference on Language Resources and Evaluation (LREC'12), pages 2214-2218, Istanbul, Turkey. European Language Resources Association (ELRA).
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. In Advances in Neural Information Processing Systems 30: Annual Conference on Neural Information Processing Systems 2017, December 4-9, 2017, Long Beach, CA, USA, pages 5998-6008.
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, and Yoav Artzi. 2020. BERTScore: Evaluating text generation with BERT. In 8th International Conference on Learning Representations, ICLR 2020, Addis Ababa, Ethiopia, April 26-30, 2020. OpenReview.net.
256,461,034 | CMCC: A Comprehensive and Large-Scale Human-Human Dataset for Dialogue Systems | Dialogue modeling problems severely limit the real-world deployment of neural conversational models and building a human-like dialogue agent is an extremely challenging task. Recently, data-driven models become more and more prevalent which need a huge amount of conversation data. In this paper, we release around 100,000 dialogue, which come from real-world dialogue transcripts between real users and customer-service staffs. We call this dataset as CMCC (China Mobile Customer Care) dataset, which differs from existing dialogue datasets in both size and nature significantly. The dataset reflects several characteristics of human-human conversations, e.g., task-driven, care-oriented, and long-term dependency among the context. It also covers various dialogue types including task-oriented, chitchat and conversational recommendation in real-world scenarios. To our knowledge, CMCC is the largest real human-human spoken dialogue dataset and has dozens of times the data scale of others, which shall significantly promote the training and evaluation of dialogue modeling methods. The results of extensive experiments indicate that CMCC is challenging and needs further effort. We hope that this resource will allow for more effective models across various dialogue sub-problems to be built in the future. | [
208248357,
224706438,
195069365,
235294326,
235313818
] | CMCC: A Comprehensive and Large-Scale Human-Human Dataset for Dialogue Systems
December 7, 2022
Yi Huang
Mobile Research
JIUTIAN Team
China
Tsinghua University-China Mobile Communications Group Co., Ltd. Joint Institute
Xiaoting Wu
Mobile Research
JIUTIAN Team
China
Tsinghua University-China Mobile Communications Group Co., Ltd. Joint Institute
Si Chen
Mobile Research
JIUTIAN Team
China
Tsinghua University-China Mobile Communications Group Co., Ltd. Joint Institute
Wei Hu
Qing Zhu
Mobile Research
JIUTIAN Team
China
Tsinghua University-China Mobile Communications Group Co., Ltd. Joint Institute
Junlan Feng
Mobile Research
JIUTIAN Team
China
Tsinghua University-China Mobile Communications Group Co., Ltd. Joint Institute
Chao Deng dengchao@chinamobile.com3ozj@tsinghua.edu.cn
Mobile Research
JIUTIAN Team
China
Tsinghua University-China Mobile Communications Group Co., Ltd. Joint Institute
Zhijian Ou
Mobile Research
JIUTIAN Team
China
Speech Processing and Machine Intelligence (SPMI) Lab
Tsinghua University
Tsinghua University-China Mobile Communications Group Co., Ltd. Joint Institute
Jiangjiang Zhao 2zhaojiangjiang@cmos.chinamobile.com
China Mobile Online Marketing and Services Center
Tsinghua University-China Mobile Communications Group Co., Ltd. Joint Institute
CMCC: A Comprehensive and Large-Scale Human-Human Dataset for Dialogue Systems
Proceedings of the Towards Semi-Supervised and Reinforced Task-Oriented Dialog Systems (SereTOD)
the Towards Semi-Supervised and Reinforced Task-Oriented Dialog Systems (SereTOD)December 7, 2022
Dialogue modeling problems severely limit the real-world deployment of neural conversational models and building a human-like dialogue agent is an extremely challenging task. Recently, data-driven models become more and more prevalent which need a huge amount of conversation data. In this paper, we release around 100,000 dialogue, which come from real-world dialogue transcripts between real users and customer-service staffs. We call this dataset as CMCC (China Mobile Customer Care) dataset, which differs from existing dialogue datasets in both size and nature significantly. The dataset reflects several characteristics of human-human conversations, e.g., task-driven, care-oriented, and long-term dependency among the context. It also covers various dialogue types including task-oriented, chitchat and conversational recommendation in real-world scenarios. To our knowledge, CMCC is the largest real human-human spoken dialogue dataset and has dozens of times the data scale of others, which shall significantly promote the training and evaluation of dialogue modeling methods. The results of extensive experiments indicate that CMCC is challenging and needs further effort. We hope that this resource will allow for more effective models across various dialogue sub-problems to be built in the future.
Introduction
Task-oriented dialogue systems (Young et al., 2013; Williams et al., 2017; Su et al., 2021; He et al., 2021; Jayanthi et al., 2021) are designed to assist users in completing daily tasks, which involves reasoning over multiple dialogue turns. Tremendous progress has been made recently, but building a human-like dialogue system remains a challenging task. To drive progress in building dialogue systems with data-driven approaches, a number of conversational corpora have been released in the past. Task-oriented dialogue corpora, such as Frames (Asri et al., 2017), MultiWOZ (Budzianowski et al., 2018), CrossWOZ (Zhu et al., 2020), and RiSAWOZ (Quan et al., 2020), are collected by two crowd workers playing the roles of the user and the system, which often leads to small-scale data and cannot sufficiently capture a number of challenges that arise with production scaling. More recently, some researchers have constructed dialogue datasets from real human-to-human conversations, especially in customer service scenarios, such as JDDC (Chen et al., 2020) and MobileCS (Ou et al., 2022). JDDC is collected from an E-commerce scenario and annotates intent information. MobileCS is collected from a mobile customer service scenario and models the process as task-oriented conversations; therefore, the entity information related to the tasks is annotated. However, the complexity of the dialogue process goes far beyond task-oriented dialogue: in addition to task completion, it is also accompanied by emotional support that appeases an angry customer and provides solutions.
Several emotional support conversation corpora (Welivita and Pu, 2020; Sharma et al., 2020; Rashkin et al., 2019; Sun et al., 2021) are designed for emotional chat or empathetic responding. Since the emotional supporters are not well-trained, existing datasets do not naturally exhibit examples or elements of supportive conversations. As a result, data-driven models which leverage such corpora are limited in their ability to explicitly learn how to provide effective support. ESConv is collected through communication between trained individuals who play the roles of the seeker and the supporter, guided by a predefined emotional support conversation framework; however, it is more focused on alleviating the negative emotions that users encounter in their daily lives.
Despite the efforts in modeling emotional support, work that focuses specifically on modeling emotional care and support in task-oriented dialogue systems is relatively limited. To this end, we design a customer service care-oriented taxonomy and annotate care-oriented information for the MobileCS dataset, covering 9 types of emotion labels and 17 types of customer service act labels. This new dataset consists of two parts: 8,975 dialogues labeled with annotations of care-oriented information and more than 90,000 unlabeled dialogues. We call this new dataset the CMCC (China Mobile Customer Care) dataset. To explain the patterns and trends of the conversation flow, we employ visualization methods to illustrate the most frequent exchanges and reveal how they temporally vary as dialogues proceed. Finally, we explore and demonstrate the effectiveness of care-oriented information for dialogue sub-tasks.
We highlight our contributions as follows:
• We provide a customer service care-oriented taxonomy, and construct the CMCC dataset on top of MobileCS to facilitate dialogue research.
• We employ visualization methods to illustrate the most frequent exchanges and reveal how patterns and trends temporally vary as dialogues proceed.
• We report the benchmark models and results of two evaluation tasks on CMCC, indicating that the dataset is a challenging testbed for future work.
2 Data Annotation
Motivation
We collect the CMCC dataset from user-customer service conversations in real-life scenarios. These dialogues are inherently rich in user and customer service acts and emotional information. Therefore, our data annotation process integrates such features and concentrates on how the customer service provides caring and empathetic acts according to the dynamics of the user's emotions. We present a novel data annotation approach by adding "User Emotion", "Expanded Customer Service Caring Act", and "Satisfaction" labels to emphasize the importance of emotions and care orientation in the conversations. To our best knowledge, few datasets have demonstrated such features in previous studies.
Guideline for Annotations
Our dataset is developed in multiple ways, which are provided in detail throughout the following sections. Compared to the MobileCS dataset, three new dimensions are added in our data annotation: user emotions, expanded customer service caring acts, and satisfaction. We also redefine the user intents to clarify the differences between intents and emotions.
User Emotion
We notice that users express various emotions throughout the conversations with customer service representatives, which can have a large impact on data division and annotation. Limited studies were conducted to consider this factor. As a result, we capture subtle user emotions throughout the conversations to derive and divide them into 8 labels for annotations. The refined annotation is necessary because customer service can act accordingly with "care-oriented" methods. We develop the "User Intent" labels from the MobileCS dataset, and add "Propose suggestion" and "Propose criticism" labels to separate intents from emotions. We pre-define an annotation schema and an intent set consisting of the 8 user emotion labels. At each turn, if emotions are explicitly expressed, the user's utterances are allowed to be annotated with one or more labels, which is common since multiple emotions could be expressed in one sentence in real-life conversations. The annotators are instructed to determine if the user's utterances contain emotions according to the schema and common sense. For example, "上次打电话说好了好了好了谁给我 开的我要投诉他" (That's fine on the last phone call. Who opened the business for me last time, I want to complain to him), the label for this sentence is "Emotionally More Agitated". "这样哦要像每 个人这样扣的话,还得了" (Would it be worth it if everyone's package was deducted like this?) is labeled with "Complain About A Problem".
Expanded Customer Service Caring Act
It's essential that good customer service provides "care-oriented" responses for emotional support. Adopting the original customer service acts from the MobileCS dataset, we derive and pre-define an "Expanded Customer Service Caring Act" set from the conversations. At each turn, the annotators are instructed to determine if the customer service utterances contain caring and empathetic acts to respond to user emotions and intents, allowing the use of multiple labels in one sentence. In addition, we extract keywords in each customer service utterance, such as "放心" (relax), "理解" (understand), and "别着急" (don't worry), etc., indicating different customer service caring acts. For example, "还有剩下的是基本费用请您放心好吧" (The rest is the basic fee, please rest assured.) is labeled as "comfort". "确实是您的心情我非常理解" (I really understand how you feel) is labeled as "empathy".
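A minimal sketch of how such keyword cues could be used as annotation hints is shown below; the cue lists are illustrative examples taken from the text, not the full keyword inventory, and the final labels were assigned by human annotators.

# Illustrative keyword cues for the caring acts; the actual annotation was
# done manually, with keywords only used as supporting hints.
CARING_CUES = {
    "comfort": ["放心", "别着急"],
    "empathy": ["理解", "您的心情"],
}

def caring_act_hints(utterance):
    """Return the caring-act labels whose cue words appear in the utterance."""
    return [act for act, cues in CARING_CUES.items()
            if any(cue in utterance for cue in cues)]

print(caring_act_hints("确实是您的心情我非常理解"))  # -> ['empathy']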
Satisfaction
The satisfaction labels are pre-defined based on the context of conversations. Each conversation is required to be annotated with one of the three labels. "3" indicates the user is satisfied; "2" indicates the user accepts the suggestion provided by the customer service representative while the problem is unsolved; "1" indicates the user is unsatisfied. The annotators are instructed to comprehend the context of the conversation and annotate each conversation with one of the three satisfaction labels. For example, customer service: "请问还有其他可以帮到 您吗?" (Is there anything else I can help you with ?) user: "没有啦谢谢" (No thanks) is labeled as "3", suggesting that the user is very satisfied with the solution and result that the customer service provided.
Annotation Results
We improve the MobileCS dataset and further develop it by incorporating user emotions, expanded customer service caring acts, and satisfaction in the dialogues. Our novel dataset is not only motivated by the inherent nature of customer service-user dialogues but also aims to emphasize a "care-oriented" focus. The experimental results also show that the CMCC dataset is valuable for modeling user-customer service conversations. The label set consists of 4 expanded customer service caring acts, 13 original customer service acts, 9 user emotions, 14 user intents, and 3 satisfaction labels in total.
Quality Control
Since the annotations are conducted on several dimensions simultaneously and under multiple criteria, missing and incorrect labels are inevitable problems we might face. To ensure a high-quality annotation result, we review and revise the missing or incorrect annotations based on several effective strategies. First, we conduct keyword extraction to check for missing and incorrect labels, which are manually filtered out and re-labeled by the qualified annotators. For example, "您稍等一下好吗,我这边的话肯定会站在你的角度去想" (Can you wait a moment, I will definitely think from your point of view) misses the "empathy" label during the first round of annotation, and it is added during the manual check. Based on this strategy, we review and re-label the dataset two more times, which guarantees the efficiency and completeness of our annotation. Additionally, for the satisfaction annotation, we randomly sample 10% of conversations to check the annotation quality. For example, "唉算了算了反正还有几天就" (Oh, forget it, there are still a few days left) is labeled as "3" in the first round of annotation, but it should be "2" instead.
Upon review, the missing labels and incorrect labels from the dataset are all revised and corrected for the quality control process. As a result, this ensures the high quality of our data annotation process.
Data Characteristics
This section mainly introduces the characteristics of the data. In addition to showing the number of conversations and labels in the dataset, we also demonstrate the characteristics of customer service dialogue data by visualizing the transition between customer service acts and user emotion in dialogues.
Data Statistics
The basic information of the labeled part of this dataset is shown in Table 1. The labeled data contains a total of 8,975 dialogues. The maximum number of dialogue turns included in the dataset is 16. Figure 1 is a histogram of dialogue turns. It can be observed that most of the dialogue turns in the dataset are concentrated between 8 and 13. This means that the dialogue between the user and the customer service typically ends in around 10 turns.
Figure 1: The histogram of dialogue turns. The horizontal axis is the number of dialogue turns, and the vertical axis is the number of dialogues, filtering the dialogues with less than 10 dialogues.
If there are situations such as a user's problem being difficult to solve, the number of turns in the dialogue increases significantly.
The histogram of user negative emotion labels is shown in Figure 2. The statistical scope is all negative emotions of users in the dialogues, excluding neutral emotions. The label with the largest proportion among the user emotion labels is "Complain About A Problem". This is a user emotion that often appears on the user side in the field of customer service dialogue. It generally occurs when users complain about networks, fees, business use, business handling, and e-commerce after-sales. The second-largest user emotion label is "Emotionally More Agitated". This label indicates that various businesses or services have seriously affected the user experience, or that customer service has not effectively helped users to solve problems. Figure 3 is a statistical histogram of customer service intent labels. It can be seen that the labels with the largest proportion are "Inform" and "Passive Confirmation". "Inform" means that the customer service informs the user of certain information, usually definite information, such as that the customer service will perform a certain operation, or that the problem will be solved within a certain period of time. "Passive Confirmation" means the act of confirming based on the user's inquiry or information provided above. Since the common content of dialogues in the field of customer service is solving the user's problem, the labels "Inform" and "Passive Confirmation" are ubiquitous in each turn of dialogue.
Data Structure
For a better understanding of the data structure, we investigate which customer service acts are frequently used when responding to different user emotions. We list the customer service act and user emotion labels, together with example instances and the proportion of each label (detailed in the appendix). Most utterances carry multiple intent or emotion labels. For example, "Hello, nice to serve you, sorry to keep you waiting" includes both "Apology" and "Greeting". Based on the statistics of user emotions and customer service acts, we can observe the overall label distribution of the dataset.
In the following, we explore the transitions between user emotions and customer service acts within a dialogue. Figure 4 is a chord diagram of emotion-act labels, representing the relationship between a user's emotion and the customer service act in the following turn. Nodes and edges of the same color represent a user emotion and the customer service act of the next turn. The most frequent transition is from "Complain About A Problem" to "Inform", which shows that when the user encounters a business problem, the customer service tends to explain the cause of the problem or a solution to the user. This matches the most common scenario in the customer service domain, namely customer service helping users solve their problems.
To observe the transitions between user emotions and customer service acts over multiple turns, we draw a Sankey diagram of user emotions and customer service acts across turns. Figure 5 shows this flow for the first four turns of the dialogues: the first and third turns are user emotions, and the second and fourth turns are customer service acts. After the customer service replies in the second turn to a user utterance with negative emotion, the user's emotion in the third turn becomes more "Neutral". This shows that as the customer service responds to the user's questions, the user's negative emotions gradually dissipate.
Experiments
In this section, we conduct experiments on the CMCC dataset. We focus on two tasks: dialogue response generation and user emotion recognition.
Dialogue Response Generation
Our experiments in this part mainly focus on the question: Can extra care-oriented information improve the generative dialogue model?
Comparable Models
Similar to Ou et al. (2022), we employ a Markovian generative architecture (MGA) (Liu et al., 2022) based on Chinese GPT-2 as the baseline and build the following variant model.
Baseline. The baseline is an MGA generative model, designed as $p_\theta(e_t, ui_t, a_t, r_t \mid e_{t-1}, u_t)$, where $u_t$ denotes the user utterance, $e_t$ the entity names of the dialogue history, $ui_t$ the user intent, and $r_t$ the customer service response at turn $t = 1, \dots, T$, for a dialogue of $T$ turns.
Variants with care-oriented information. To incorporate the care-oriented annotations into the baseline model, we add user emotion generation and expand the original customer service acts with caring acts. As shown in Figure 6, for each customer service response we append the user emotion before the corresponding customer service act. The MGA generative process can then be represented as $p_\theta(e_t, ui_t, uemo_t, a_t, r_t \mid e_{t-1}, u_t)$, where $uemo_t$ is the user emotion at turn $t$. The model generates the response conditioned on the predicted user emotion and customer service act.
We study two variants that use care-oriented annotations in the experiments. (1) End2End: the customer service response is generated conditioned on the predicted customer service act and the predicted user emotion; the user emotion and customer service act are generated conditioned on the KB result; and the KB result is queried conditioned on the predicted entity name and user intent. (2) Oracle: the customer service response is generated conditioned on the gold reference of the customer service act, entity name, user intent, and KB result.
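A minimal sketch (our own illustration, not the authors' released code) of how a single turn might be serialized into a training sequence for the GPT-2-based variant model; the special tokens, field names, and field order here are assumptions for illustration only.

```python
# Hypothetical serialization of one dialogue turn for the care-oriented variant model.
# Special tokens and field order are assumptions, not the authors' actual format.
def serialize_turn(entities, user_utterance, user_intent, kb_result,
                   user_emotion, service_act, response):
    """Build a token sequence realizing p_theta(e_t, ui_t, uemo_t, a_t, r_t | e_{t-1}, u_t)."""
    return (
        f"<entity> {' '.join(entities)} "
        f"<user> {user_utterance} "
        f"<intent> {user_intent} "
        f"<kb> {kb_result} "
        f"<emotion> {user_emotion} "      # care-oriented addition: emotion precedes the act
        f"<act> {service_act} "
        f"<response> {response} <eos>"
    )

example = serialize_turn(
    entities=["broadband"],
    user_utterance="The broadband network is not working, please fix it now!",
    user_intent="report_fault",
    kb_result="order_found",
    user_emotion="Complain About A Problem",
    service_act="Inform; Soothe",
    response="I will check it for you right now, please do not worry.",
)
print(example)
```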
Evaluation Measures
To investigate the impact of utilizing care-oriented information on the model performance with Chinese GPT-2 as backbone, we compare the performance of End2End and Oracle variants with the Baseline model. The automatic metrics include F1 score, Success rate and BLEU score. F1 is calculated for both predicted user intent and customer service act. Success rate (Budzianowski et al., 2018) is the percentage of generated dialogues that achieve user goals. BLEU-4 score (Papineni et al., 2002) evaluates the fluency of generated responses.
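The sketch below illustrates how the three automatic metrics could be computed; the success-rate definition follows Budzianowski et al. (2018) only loosely and is simplified here, and the F1 is shown for multi-label act prediction.

```python
# Hedged sketch of the automatic metrics (F1, success rate, BLEU-4).
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction
from sklearn.metrics import f1_score

def bleu4(references, hypotheses):
    # references / hypotheses: lists of strings, one reference per generated response
    refs = [[r.split()] for r in references]
    hyps = [h.split() for h in hypotheses]
    smooth = SmoothingFunction().method1
    return corpus_bleu(refs, hyps, weights=(0.25, 0.25, 0.25, 0.25),
                       smoothing_function=smooth) * 100

def act_f1(gold_acts, pred_acts, labels):
    # gold_acts / pred_acts: per-turn label lists, converted to binary indicator vectors
    gold = [[int(l in g) for l in labels] for g in gold_acts]
    pred = [[int(l in p) for l in labels] for p in pred_acts]
    return f1_score(gold, pred, average="micro")

def success_rate(dialogue_goal_achieved):
    # dialogue_goal_achieved: list of booleans, one per generated dialogue (simplified notion)
    return sum(dialogue_goal_achieved) / len(dialogue_goal_achieved)
```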
Experimental Results
The experimental results are shown in Table 2, which demonstrates the effectiveness of our model. There are 3 major findings from the experiments.
(1) The variant model improves the baseline's user intent F1, success rate, and response BLEU-4, but the F1 of the customer service act decreases slightly. This may be because the variant model expands the original customer service act label set, and the labels with less data hurt the overall act F1. (2) For both End2End and Oracle settings, the variant model is better than the baseline in response BLEU-4; we attribute this to the care-oriented information, which enhances dialogue generation. Care-oriented information includes the user emotion and the expanded customer service caring acts; which part brings more gain is analyzed in the ablation experiments. (3) End2End results are lower than Oracle results, because when the predicted intermediate results differ from the ground truth, the generated response deviates considerably from the reference response.
Analysis
Our variant models consider care-oriented information, user emotion and customer service caring act.
To investigate further, we conduct extra experiments and analyze the results. To verify the improvement brought by each added part (user emotion, expanded customer service caring acts), we drop these two parts from the variant model and check the performance changes. Results are presented in Table 3. We make the following observations: (1) In most circumstances, when user emotion is removed, BLEU-4 drops more and the success rate drops less. (2) When the expanded customer service caring acts are removed, the situation differs: BLEU-4 drops less and the success rate drops more. This indicates that the expanded caring acts provide more gain for the entity-related part of the response, while user emotion matters more for the non-entity-related part (e.g., caring or empathetic responding). In Table 4, examples compare the responses generated by the variant model and the baseline model. The first column is the user utterance, the second column is the response of the human customer service agent, and the third and fourth columns are the responses generated by the variant and baseline models, respectively. In the first example, the user reports that the broadband network is not working well and complains about it; the variant model generates a response with the soothing keyword "马上" (right now). In the second example, the user's emotion is neutral, and the variant model still generates a friendlier response containing "请您放心" (please do not worry). Intuitively, the variant model, which incorporates care-oriented information, achieves better performance than the baseline model.
User Emotion Recognition
In this part, we focus on the effect of different models used in the emotion recognition task: the classification-based model and generation-based one. We will conduct experiments on the dataset CMCC and answer the question: are both models suitable to solve the emotion recognition problem?
Classification-based Model
We first treat the emotion recognition task as a multi-label classification problem, because a user utterance may contain multiple emotions, e.g., complaining about a problem and being dissatisfied with business rules at the same time. Taking the pre-trained model bert-base-chinese 1 as the backbone, the classification model takes dialogue utterances X as input and predicts a binary result for each possible dialogue emotion:
$$P = \mathrm{Sigmoid}(W \cdot G(X)) \in \mathbb{R}^{N}, \quad (1)$$
where $W \in \mathbb{R}^{d_b \times N}$ is a trainable weight matrix, $d_b$ is the hidden size, and $G$ encodes the input utterance into its representation. The model is trained with the binary cross-entropy loss, and a threshold of 0.3 is used to decide whether an emotion is triggered. We evaluate micro-F1 and macro-F1 scores on our dataset for the emotion recognition task.
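A minimal sketch of such a multi-label classifier following Eq. (1); apart from the bert-base-chinese backbone and the 0.3 decision threshold stated above, the hyper-parameters (and the placeholder number of emotion classes) are our own assumptions.

```python
# Minimal multi-label emotion classifier sketch for Eq. (1).
import torch
import torch.nn as nn
from transformers import BertModel, BertTokenizer

class EmotionClassifier(nn.Module):
    def __init__(self, num_emotions, backbone="bert-base-chinese"):
        super().__init__()
        self.encoder = BertModel.from_pretrained(backbone)   # G in Eq. (1)
        self.W = nn.Linear(self.encoder.config.hidden_size, num_emotions)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = hidden.last_hidden_state[:, 0]               # [CLS] representation
        return torch.sigmoid(self.W(pooled))                  # P in R^N

tokenizer = BertTokenizer.from_pretrained("bert-base-chinese")
model = EmotionClassifier(num_emotions=10)                    # 10 is a placeholder
batch = tokenizer(["这个网络太慢了"], return_tensors="pt", padding=True, truncation=True)
probs = model(batch["input_ids"], batch["attention_mask"])
pred = (probs > 0.3).int()                                    # 0.3 threshold from the paper
# training would minimize nn.BCELoss()(probs, binary_targets)
```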
Generation-based Model
The generation-based model is the variant model of Section 4.1.1: the user emotion is generated conditioned on the KB result, which is queried conditioned on the predicted entity name and user intent. Micro-F1 and macro-F1 are the aggregation methods for the user emotion recognition task. Specifically, the micro-F1 score gives equal importance to each observation; when the classes are imbalanced, the classes with more observations have a larger impact on the micro-F1 score, so it tends to hide the performance on minority classes and amplify that on majority classes. The macro-F1 score, on the other hand, gives equal importance to each class, so a majority class contributes equally with a minority class, allowing macro-F1 to return objective results on imbalanced datasets. As shown in Table 5, our experiments show that the generation-based approach improves emotion classification performance on the imbalanced classes, from a classification-based baseline of 30.1% macro-F1 to 39.3%, an increase of 9.2 points.
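The toy example below (our own, with made-up labels) shows how the two averaging modes weigh an imbalanced label set differently.

```python
# Illustration of micro- vs. macro-F1 on an imbalanced multi-label toy example.
from sklearn.metrics import f1_score
import numpy as np

# three emotion classes; the third class is rare and never predicted
y_true = np.array([[1, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 1]])
y_pred = np.array([[1, 0, 0], [1, 0, 0], [1, 0, 0], [0, 1, 0]])

print("micro-F1:", f1_score(y_true, y_pred, average="micro"))  # dominated by frequent classes
print("macro-F1:", f1_score(y_true, y_pred, average="macro"))  # every class weighted equally
```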
Table 5: Emotion recognition performance using two different models (the generation-based model and the classification-based one).

Models               | micro-F1 | macro-F1
Generation-based     | 0.832    | 0.393
Classification-based | 0.859    | 0.301
Conclusion
In this paper, we present CMCC, to date the largest human-to-human real-life dataset annotated with rich care-oriented information on top of MobileCS.
We not only manually label each dialogue with comprehensive user emotion, customer service act, and satisfaction annotations for various sub-tasks of multi-domain dialogue systems, but also investigate, via empirical experiments, approaches that facilitate research on care-oriented dialogue. In addition, the data annotation and visualization processes are described in detail. We also report benchmark models and results for two evaluation tasks on CMCC, indicating that the dataset is a challenging testbed for future work. We will enrich the dataset annotations (e.g., solutions, external knowledge, and API calls) in future work, and we hope it will benefit future research on dialogue systems.
Table 9: The joint performance of the stack-propagation model (Qin et al., 2019) on the CMCC dataset with or without emotion labeling.
Table 9 gives the result of the experimental comparison for the entity extraction task. From the first two rows, we observe that without the emotion labels, and simply incorporating the sequence labeling information, the entity extraction performance (micro-F1) drops slightly, which demonstrates that directly leveraging the emotion information can slightly improve entity extraction.
Figure 2: The histogram of user negative emotion. The horizontal axis is user emotion labels, and the vertical axis is the number of emotions.
Figure 3: The histogram of customer service act. The horizontal axis is the customer service act label, and the vertical axis is the number of acts.
Figure 4: The chord diagram for the user emotion and customer service act relationship. More details on labels can be found in the appendix. Best viewed in color.
Figure 5: Dynamic transformation of user emotion vs. customer service act in the first four rounds of dialogue. Best viewed in color.
Figure 6: Variant model architecture with care-oriented information.
Table 1: Dialogue statistics in the dataset.

Criteria                                                  | Statistics
Total no. of dialogues                                    | 8,975
Total no. of dialogue turns                               | 100,139
Average no. of turns per dialogue                         | 22.31
Maximum no. of turns per dialogue                         | 16 (353 dialogues)
Minimum no. of turns per dialogue                         | 5 (1 dialogue)
Total no. of customer service turns                       | 100,139
Total no. of user turns                                   | 100,138
Average no. of customer service tokens per dialogue turn  | 25.27
Average no. of user tokens per dialogue turn              | 14.58
Table 2: Results of automatic evaluation. The results in bold are better than the baseline.

Models                   | F1 for user intent | Success rate | F1 for customer service act | BLEU-4
Baseline Model (End2End) | 0.642              | 0.315        | 0.575                       | 4.137
Variant Model (End2End)  | 0.656              | 0.357        | 0.567                       | 4.669
Baseline Model (Oracle)  | -                  | -            | -                           | 6.230
Variant Model (Oracle)   | -                  | -            | -                           | 7.385
Table 3: Evaluation results of the ablation study.
Table 4: Responses generated by the variant model and the baseline model.
Table 6: Types, instances, and proportions of customer service acts. (Recovered example instance: "It works between 8:00 am and 6:00 pm every day. It means that it is already off work now and can only be called after 8:00 am tomorrow.")
Table 7: Types, instances, and proportions of customer service acts. (Recovered example instances: "It's the time when the Internet is online, the network always has that network abnormality, what's the matter?"; "No, _ called and he told me that I would do it, and even a 30G package. Why are you full of crap?"; "As soon as possible, it's been four or five days, and the point is that I applied for repairs three days ago and no one has called me."; "I'm in my sixties, can I not be in a hurry?"; "Impossible, how can there be such an overlord clause; I don't want to use it, and if I cancel it, why not let me cancel?"; "Yes, I hope that no matter what the result is from your backstage staff, you can let me know in the shortest possible time.")
Table 8: Types, instances, and proportions of user negative emotions.
Figure 7: User emotion-customer service act conversion relationship chord diagram.
https://huggingface.co/bert-base-chinese
Layla El Asri, Hannes Schulz, Shikhar Sharma, Jeremie Zumer, Justin Harris, Emery Fine, Rahul Mehrotra, and Kaheer Suleman. 2017. Frames: A corpus for adding memory to goal-oriented dialogue systems. arXiv preprint arXiv:1704.00057.
Paweł Budzianowski, Tsung-Hsien Wen, Bo-Hsiang Tseng, Inigo Casanueva, Stefan Ultes, Osman Ramadan, and Milica Gašić. 2018. MultiWOZ - A large-scale multi-domain Wizard-of-Oz dataset for task-oriented dialogue modelling. arXiv preprint arXiv:1810.00278.
Meng Chen, Ruixue Liu, Lei Shen, Shaozu Yuan, Jingyan Zhou, Youzheng Wu, Xiaodong He, and Bowen Zhou. 2020. The JDDC corpus: A large-scale multi-turn Chinese dialogue dataset for e-commerce customer service. In Proceedings of the 12th Language Resources and Evaluation Conference (LREC 2020), pages 459-466. European Language Resources Association.
Wanwei He, Yinpei Dai, Yinhe Zheng, Yuchuan Wu, Zheng Cao, Dermot Liu, Peng Jiang, Min Yang, Fei Huang, Luo Si, et al. 2021. GALAXY: A generative pre-trained model for task-oriented dialog with semi-supervised learning and explicit policy injection. arXiv preprint arXiv:2111.14592.
Sai Muralidhar Jayanthi, Varsha Embar, and Karthik Raghunathan. 2021. Evaluating pretrained transformer models for entity linking in task-oriented dialog. arXiv preprint arXiv:2112.08327.
Hong Liu, Yucheng Cai, Zhijian Ou, Yi Huang, and Junlan Feng. 2022. Revisiting Markovian generative architectures for efficient task-oriented dialog systems. arXiv preprint arXiv:2204.06452.
Siyang Liu, Chujie Zheng, Orianna Demasi, Sahand Sabour, Yu Li, Zhou Yu, Yong Jiang, and Minlie Huang. 2021. Towards emotional support dialog systems. In Proceedings of ACL/IJCNLP 2021 (Volume 1: Long Papers), pages 3469-3483. Association for Computational Linguistics.
Zhijian Ou, Junlan Feng, Juanzi Li, Yakun Li, Hong Liu, Hao Peng, Yi Huang, and Jiangjiang Zhao. 2022. A challenge on semi-supervised and reinforced task-oriented dialog systems. arXiv preprint arXiv:2207.02657.
Kishore Papineni, Salim Roukos, Todd Ward, and Wei-Jing Zhu. 2002. BLEU: a method for automatic evaluation of machine translation. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics, pages 311-318.
Libo Qin, Wanxiang Che, Yangming Li, Haoyang Wen, and Ting Liu. 2019. A stack-propagation framework with token-level intent detection for spoken language understanding. arXiv preprint arXiv:1909.02188.
Jun Quan, Shian Zhang, Qian Cao, Zizhong Li, and Deyi Xiong. 2020. RiSAWOZ: A large-scale multi-domain Wizard-of-Oz dataset with rich semantic annotations for task-oriented dialogue modeling. In Proceedings of EMNLP 2020, pages 930-940. Association for Computational Linguistics.
Hannah Rashkin, Eric Michael Smith, Margaret Li, and Y-Lan Boureau. 2019. Towards empathetic open-domain conversation models: A new benchmark and dataset. In Proceedings of ACL 2019 (Volume 1: Long Papers), pages 5370-5381. Association for Computational Linguistics.
Ashish Sharma, Adam S. Miner, David C. Atkins, and Tim Althoff. 2020. A computational approach to understanding empathy expressed in text-based mental health support. In Proceedings of EMNLP 2020, pages 5263-5276. Association for Computational Linguistics.
Yixuan Su, Lei Shu, Elman Mansimov, Arshit Gupta, Deng Cai, Yi-An Lai, and Yi Zhang. 2021. Multi-task pre-training for plug-and-play task-oriented dialogue system. arXiv preprint arXiv:2109.14739.
Hao Sun, Zhenru Lin, Chujie Zheng, Siyang Liu, and Minlie Huang. 2021. PsyQA: A Chinese dataset for generating long counseling text for mental health support. In Findings of the Association for Computational Linguistics: ACL/IJCNLP 2021, pages 1489-1503. Association for Computational Linguistics.
Anuradha Welivita and Pearl Pu. 2020. A taxonomy of empathetic response intents in human social conversations. In Proceedings of COLING 2020, pages 4886-4899. International Committee on Computational Linguistics.
Jason D. Williams, Kavosh Asadi, and Geoffrey Zweig. 2017. Hybrid code networks: Practical and efficient end-to-end dialog control with supervised and reinforcement learning. In Proceedings of ACL 2017 (Volume 1: Long Papers), pages 665-677. Association for Computational Linguistics.
Steve J. Young, Milica Gasic, Blaise Thomson, and Jason D. Williams. 2013. POMDP-based statistical spoken dialog systems: A review. Proceedings of the IEEE, 101(5):1160-1179.
Qi Zhu, Kaili Huang, Zheng Zhang, Xiaoyan Zhu, and Minlie Huang. 2020. CrossWOZ: A large-scale Chinese cross-domain task-oriented dialogue dataset. Transactions of the Association for Computational Linguistics, 8:281-295.
18,729,457 | ECNU at SemEval-2016 Task 3: Exploring Traditional Method and Deep Learning Method for Question Retrieval and Answer Ranking in Community Question Answering | This paper describes the system we submitted to the task 3 (Community Question Answering) in SemEval 2016, which contains three subtasks, i.e., Question-Comment Similarity (subtask A), Question-Question Similarity (subtask B), and Question-External Comment Similarity (subtask C). For subtask A, we employed three different methods to rank question-comment pair, i.e., supervised model using traditional features, Convolutional Neural Network and Long-Short Term Memory Network. For subtask B, we proposed two novel methods to improve semantic similarity estimation between question-question pair by integrating the rank information of questioncomment pair. For subtask C, we implemented a two-step strategy to select out the similar questions and filter the unrelated comments with respect to the original question. | [
1957433,
10402642
] | ECNU at SemEval-2016 Task 3: Exploring Traditional Method and Deep Learning Method for Question Retrieval and Answer Ranking in Community Question Answering
Guoshun Wu and Man Lan* (mlan@cs.ecnu.edu.cn)
Shanghai Key Laboratory of Multidimensional Information Processing, Department of Computer Science and Technology, East China Normal University, Shanghai, P.R. China
In Proceedings of SemEval-2016, San Diego, California, June 16-17, 2016. Association for Computational Linguistics.
This paper describes the system we submitted to the task 3 (Community Question Answering) in SemEval 2016, which contains three subtasks, i.e., Question-Comment Similarity (subtask A), Question-Question Similarity (subtask B), and Question-External Comment Similarity (subtask C). For subtask A, we employed three different methods to rank question-comment pair, i.e., supervised model using traditional features, Convolutional Neural Network and Long-Short Term Memory Network. For subtask B, we proposed two novel methods to improve semantic similarity estimation between question-question pair by integrating the rank information of questioncomment pair. For subtask C, we implemented a two-step strategy to select out the similar questions and filter the unrelated comments with respect to the original question.
Introduction
The purpose of Community Question Answering task in SemEval 2016 (Nakov et al., 2016) is to provide a platform for finding good answers to new questions in a community-created discussion forum, where the main task (subtask C) is defined as follows: given a new question and a large collection of question-comment threads created by a user community, participants are required to rank the comments that are most useful for answering the new question. Obviously, this main task consists of two optional subtasks, i.e., Question-Comment Similarity (subtask A, also known as answer ranking), which is to re-rank comments/answers according to their relevance with respect to the question, and Question-Question Similarity (i.e., subtask B, also known as question retrieval), which is to retrieve the similar questions according to their semantic similarity with respect to the original question.
To address subtask A, we explored a traditional machine learning method which uses multiple types of features, e.g., Word Match features, Translation-based features, and Lexical Semantic Similarity features. Additionally, for subtask A we also built a Convolutional Neural Network (CNN) model and a bidirectional Long Short-Term Memory (BLSTM) model to learn a joint representation for the question-comment (Q-C) pair. For subtask B, besides an IR method and a traditional machine learning method, we also proposed two novel methods to improve semantic similarity estimation between question-question (Q-Q) pairs by integrating the rank information of Q-C pairs. Since subtask C can be regarded as a joint task composed of the two above-mentioned subtasks, we implemented a two-step strategy to first select similar questions and then filter out the unrelated comments with respect to the original question.
The rest of this paper is organized as follows. Section 2 describes our system. Section 3 describes experimental setting. Section 4 and 5 report results on training and test sets. Finally, Section 6 concludes this work.
System Description
For subtask A, we present three different methods, i.e., using traditional linguistic features, and learning a CNN model and a bidirectional LSTM model to represent question and comment sentences. For subtask B, besides traditional methods, we propose two novel methods to improve semantic similarity estimation between Q-Q pairs by integrating the rank information of Q-C pairs. The first is to adopt general ranking evaluation metrics of Q0-C and Q1-C (i.e., Spearman, Pearson, and Kendall coefficients) as additional ranking scores or features of Q0-Q1, where Q0 and Q1 represent the original question and its related question, respectively. The second is to extract features on Q0-C and Q1-C and to regard the cosine values calculated on these two feature vectors as additional features for Q0-Q1.
Features Engineering
All three subtasks can be regarded as an estimation task of sentence semantic measures which can be modeled by various types of features. In this work, we employed the following four types of features borrowed from previous work, i.e., Word Match Features, Translation Based Features, Topic Model Based Features, and Lexical Semantic Similarity Features. The details of these four types of features are described as follows. Note that the following four feature types are adopted in both Q-Q and Q-C pairs, here we took the Q-Q pair for example.
Word Matching Feature (WM): This feature records the proportions of co-occurred words between a given sentence pair. Given a Q-Q pair, this feature type is calculated using five measures:
|Q0 ∩ Q1|, |Q0 ∪ Q1|/|Q0|, |Q0 ∩ Q1|/|Q1|, |Q1 − Q0|/|Q1|, |Q0 − Q1|/|Q0|,
where |Q0| and |Q1| denote the numbers of words in Q0 and Q1, respectively.
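A small sketch of these five overlap measures, implemented over token sets (the paper does not specify whether duplicate words are counted, so this is one plausible reading).

```python
# The five word-overlap measures over a question pair (set-based sketch).
def word_match_features(q0_tokens, q1_tokens):
    q0, q1 = set(q0_tokens), set(q1_tokens)
    return [
        len(q0 & q1),
        len(q0 | q1) / len(q0),
        len(q0 & q1) / len(q1),
        len(q1 - q0) / len(q1),
        len(q0 - q1) / len(q0),
    ]

print(word_match_features("why is my wifi so slow".split(),
                          "my wifi connection is slow at night".split()))
```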
Translation Based Feature (TB): The above WM feature only considers the overlapping words between the Q-Q pair and thus may fail to "bridge the lexical gap" between the two questions. One possible solution is to regard this task as a statistical machine translation problem between question and answer, using IBM Model 1 (Brown et al., 1993) to learn word-to-word probabilities. Following (Xue et al., 2008; Surdeanu et al., 2011), we regard P(Q0|Q1), i.e., the translation probability of Q0 given Q1, as a translation based feature. The probabilities are calculated as:
$$P(Q_0|Q_1) = \prod_{w \in Q_0} P(w|Q_1)$$
$$P(w|Q_1) = (1-\lambda)\,P_{tr}(w|Q_1) + \lambda\,P_{ml}(w|C)$$
$$P_{tr}(w|Q_1) = \sum_{a \in Q_1} P(w|a)\,P_{ml}(a|Q_1)$$
where P(w|Q1) is the probability that the Q0 word w is generated from Q1, λ is a smoothing parameter, and C is a background collection. P_ml(w|C) is computed with a maximum likelihood estimator, and P(w|a) denotes the translation probability from the Q1 word a to the Q0 word w. The GIZA++ Toolkit 1 is used to compute these probabilities.
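The following sketch computes the translation-based probability once a word-to-word table P(w|a) is available; in this paper that table would come from GIZA++ (IBM Model 1), and here it is replaced by a toy dictionary purely for illustration.

```python
# Sketch of the translation-based feature P(Q0|Q1) given a word translation table.
from collections import Counter

def p_tr(w, q1_tokens, trans_prob):
    q1_counts = Counter(q1_tokens)
    n = len(q1_tokens)
    return sum(trans_prob.get((w, a), 0.0) * (c / n) for a, c in q1_counts.items())

def translation_feature(q0_tokens, q1_tokens, trans_prob, collection_prob, lam=0.5):
    score = 1.0
    for w in q0_tokens:
        p_w = (1 - lam) * p_tr(w, q1_tokens, trans_prob) + lam * collection_prob.get(w, 1e-6)
        score *= p_w
    return score

toy_table = {("cost", "price"): 0.4, ("cost", "fee"): 0.3}      # made-up probabilities
print(translation_feature(["cost"], ["price", "of", "plan"], toy_table, {"cost": 0.01}))
```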
Topic Model Based Feature (TMB): We use the LDA model (Blei et al., 2003) to transform Q0 and Q1 into topic-based vectors and take the cosine value of the two topic vectors as a feature. We use the GibbsLDA++ Toolkit (Phan and Nguyen, 2007) to train the topic model.
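A sketch of the topic-model feature using gensim rather than GibbsLDA++ (a substitution for illustration only); the number of topics and toy corpus are assumptions.

```python
# Sketch of the TMB feature: cosine between LDA topic vectors of the two questions.
from gensim import corpora
from gensim.models import LdaModel
from gensim.matutils import cossim

docs = [q.split() for q in ["how to cancel my data plan",
                            "cancel the plan and get a refund",
                            "why is the network so slow today"]]
dictionary = corpora.Dictionary(docs)
bows = [dictionary.doc2bow(d) for d in docs]
lda = LdaModel(bows, num_topics=2, id2word=dictionary, passes=10, random_state=0)

def tmb_feature(q0_tokens, q1_tokens):
    v0 = lda[dictionary.doc2bow(q0_tokens)]   # sparse topic vector of Q0
    v1 = lda[dictionary.doc2bow(q1_tokens)]   # sparse topic vector of Q1
    return cossim(v0, v1)                     # cosine of the two topic vectors

print(tmb_feature(docs[0], docs[1]))
```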
Lexical Semantic Similarity Feature (LSS): Inspired by (Yih et al., 2013), we include lexical semantic similarity features in our model. We use three different word vectors to compute the LSS features, i.e., the 300-dimensional version of word2vec (Mikolov et al., 2013) vectors, 300-dimensional GloVe vectors (Pennington et al., 2014), and 300-dimensional vectors pre-trained with the unsupervised neural language model (Mikolov et al., 2013) on the Qatar Living data 2. Words not present in the set of pre-trained words are initialized randomly. There are two ways to calculate the LSS features: one is to calculate the cosine similarity between the sums of all word vectors in Q0 and in Q1; the other is to take the averaged pairwise cosine similarity between each word in Q0 and each word in Q1.
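The two LSS variants, sketched with a generic {word: vector} lookup (e.g., loaded from GloVe or word2vec files); out-of-vocabulary handling is simplified here.

```python
# Sketch of the two LSS feature variants over a {word: np.ndarray} embedding lookup.
import numpy as np

def cos(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def lss_sum(q0_tokens, q1_tokens, emb):
    # cosine between the sums of all word vectors in Q0 and in Q1
    v0 = np.sum([emb[w] for w in q0_tokens if w in emb], axis=0)
    v1 = np.sum([emb[w] for w in q1_tokens if w in emb], axis=0)
    return cos(v0, v1)

def lss_pairwise(q0_tokens, q1_tokens, emb):
    # averaged pairwise cosine similarity between each Q0 word and each Q1 word
    sims = [cos(emb[w0], emb[w1])
            for w0 in q0_tokens if w0 in emb
            for w1 in q1_tokens if w1 in emb]
    return float(np.mean(sims)) if sims else 0.0
```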
Besides the above four types of features, for the Q-Q pair we also extract two question information (QI) features to describe the informativeness of the related question Q1: (1) the number of words in Q1 and (2) the position of Q1 among all related questions. For the Q-C pair, we also extract two comment information (CI) features to measure the informativeness of a comment text: (1) the number of words in the comment and (2) the number of nouns, verbs, and adjectives in the comment.
Two Methods to address subtask A
Method 1: CNN
We propose a convolutional neural network to model the question-comment sentence pair. As illustrated in Figure 1, we first input the word embeddings (here, the 300-dimensional GloVe vectors of Pennington et al. (2014)) of the question and comment words, and then learn the meaning (i.e., a feature vector) of question and comment through convolution and pooling operations. After a simple concatenation layer connecting the question and comment vectors, we finally obtain a relevance score through a softmax operation.
Method 2: BLSTM
Figure 2 shows the multi-layer BLSTM network model we use for modeling question and comment sentences. The procedure of the BLSTM is similar to that of the CNN: the words of the question and comment sentences are first converted into vectors by looking up the publicly available 300-dimensional GloVe vectors, and are then read sequentially by the BLSTM from both directions. In this way, the contextual information across words in both the question and the comment is modeled by the temporal recurrence of the BLSTM. Like the CNN, the model finally outputs a relevance score between question and comment via a simple concatenation of the two output vectors followed by a softmax operation.
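A compact PyTorch sketch of the two relevance models; layer sizes follow the paper where stated (filter window 2, 100 feature maps, BLSTM memory 500), while everything else (pooling choice, output layer) is an assumption for illustration.

```python
# Sketches of the CNN and BLSTM question-comment relevance models.
import torch
import torch.nn as nn

class CNNMatcher(nn.Module):
    def __init__(self, emb_dim=300, n_filters=100, window=2):
        super().__init__()
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=window)
        self.out = nn.Linear(2 * n_filters, 2)            # concat(question, comment) -> softmax

    def encode(self, x):                                   # x: (batch, seq_len, emb_dim)
        h = torch.relu(self.conv(x.transpose(1, 2)))       # (batch, n_filters, seq_len')
        return h.max(dim=2).values                         # max pooling over time

    def forward(self, q, c):
        joint = torch.cat([self.encode(q), self.encode(c)], dim=1)
        return torch.softmax(self.out(joint), dim=1)       # relevance distribution

class BLSTMMatcher(nn.Module):
    def __init__(self, emb_dim=300, hidden=500):
        super().__init__()
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.out = nn.Linear(4 * hidden, 2)

    def encode(self, x):
        h, _ = self.lstm(x)
        return h[:, -1]                                    # last time step, both directions

    def forward(self, q, c):
        joint = torch.cat([self.encode(q), self.encode(c)], dim=1)
        return torch.softmax(self.out(joint), dim=1)
```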
Two Methods for subtask B
To calculate the semantic similarity of a Q0-Q1 pair, previous work extracted features only from the Q-Q sentence pair. We argue that the comment set C and its rank with respect to Q1 also provide useful information for question-question similarity. We therefore propose two novel methods to improve semantic similarity estimation between the Q-Q pair by integrating the rank information of the Q-C pairs.
Method 1: adopt Q-C Ranking
Evaluation Metrics as Similarity Score
The first method is to adopt rank evaluation metrics, i.e., Spearman, Pearson, and Kendall Ranking Coefficient directly as similarity scores for question similarity estimation.
Generally, these three rank correlation coefficients measure the statistical dependence between two variables and assess how similar they are. In comment ranking, they measure how similar the two rankings Q0-C and Q1-C are. Our intuition is that, given one comment set C, if the two rankings of Q0-C and Q1-C are similar, then the semantic similarity between Q0 and Q1 is high. These three ranking correlation coefficients (Spearman, Pearson, and Kendall) can be used directly as question similarity scores, or as additional ranking scores in combination with the other features (described in Section 2.1) extracted from the Q0-Q1 pair.
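The sketch below shows the three coefficients computed with SciPy over the two lists of comment scores (one per question); the toy scores are made up.

```python
# Rank-correlation scores between the two comment rankings, used as Q0-Q1 similarity.
from scipy.stats import pearsonr, spearmanr, kendalltau

def ranking_similarity(scores_q0_c, scores_q1_c):
    # scores_q*_c: relevance scores of the same comment set C w.r.t. Q0 and Q1
    return {
        "pearson": pearsonr(scores_q0_c, scores_q1_c)[0],
        "spearman": spearmanr(scores_q0_c, scores_q1_c)[0],
        "kendall": kendalltau(scores_q0_c, scores_q1_c)[0],
    }

print(ranking_similarity([0.9, 0.4, 0.7, 0.1], [0.8, 0.5, 0.6, 0.2]))
```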
Method 2: add new features extracted from Q-C pair
We present two ways of adding new features extracted from the Q-C pairs. The first is to extract features from the Q0-C and Q1-C pairs and use cosine scores calculated on the two feature vectors as additional features for Q0-Q1. We extract the traditional NLP features described in Section 2.1 from the Q0-C and Q1-C pairs, respectively, obtaining two feature vectors F0 and F1. We then calculate the cosine similarity for each of these two vectors, obtaining two cosine scores, cos(Q0-C) and cos(Q1-C), and compute the absolute difference between them. The resulting scores (denoted [cos1, cos2, ...]) are used as additional features.
The second is to first calculate the ranking scores of Q0-C and Q1-C using the comment ranking model, and then use the Manhattan distance between the two lists of ranked scores as an additional feature.
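A sketch of both Q-C derived features; the exact pairing of the cosine scores is our interpretation of the (somewhat underspecified) description above, and all vectors here are placeholders.

```python
# Q-C derived features for a Q0-Q1 pair: cosine-difference features and Manhattan distance.
import numpy as np

def cosine(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def cosine_diff_features(q0_vec, q1_vec, comment_vecs):
    # |cos(Q0, c) - cos(Q1, c)| for every comment c in the thread
    return [abs(cosine(q0_vec, c) - cosine(q1_vec, c)) for c in comment_vecs]

def manhattan_rank_feature(rank_scores_q0_c, rank_scores_q1_c):
    # Manhattan distance between the two lists of comment ranking scores
    a, b = np.asarray(rank_scores_q0_c), np.asarray(rank_scores_q1_c)
    return float(np.abs(a - b).sum())
```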
A Two-Step Filtering for Subtask C
To overcome the error propagation from the question-question similarity step to the question-comment similarity step, we employ a two-step filtering strategy for subtask C. The first step is to choose the top N similar questions with the aid of the Q-C ranking. The second step is to re-rank the comments and choose the top M comments, integrating the previous Q-Q results.
Experimental Setting
Datasets
Table 1 shows the statistics of the training, development, and test data sets, where # original, # related, and # answers denote the numbers of original questions, related questions, and answers, respectively. Comments are labeled with respect to the original question and the related question with three classes: Good, PotentiallyUseful, and Bad. Related questions are labeled with respect to the original question with three classes: PerfectMatch, Relevant, and Irrelevant.
Preprocessing
We first removed stop words and punctuation, and changed all words to lowercase. After that, we performed tokenization and stemming using NLTK 3 Toolkit.
Evaluation Metrics
To evaluate performance on the tasks, Mean Average Precision (MAP) is adopted by the organizers as the official evaluation measure; MAP is defined as the mean of the average precision scores over the queries.
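A minimal implementation of MAP over binary relevance lists (1 = relevant/Good comment, 0 = otherwise), ordered by the system's ranking; the toy input is made up.

```python
# Mean Average Precision over a set of queries.
def average_precision(relevance):
    hits, precisions = 0, []
    for i, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / max(hits, 1)

def mean_average_precision(all_relevance_lists):
    return sum(average_precision(r) for r in all_relevance_lists) / len(all_relevance_lists)

print(mean_average_precision([[1, 0, 1, 0], [0, 1, 1, 1]]))
```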
Learning Algorithm
We compare two ranking strategies for the traditional method. One is to train a pairwise learning-to-rank model (Trotman, 2005) and use the output of the model directly as the ranking score. The other is to first train a supervised classification model and then use the predicted class probability as the ranking score. To train a supervised classifier, two algorithms implemented in scikit-learn 4 were examined, i.e., Logistic Regression (LR) and Support Vector Machine (SVM). Finally, the Logistic Regression classifier (with penalty parameter C = 1) is adopted for all three subtasks because of its good performance in preliminary experiments.
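A small sketch of the second strategy (ranking by classifier confidence); the feature matrices here are placeholders for the Q-C feature vectors described in Section 2.1.

```python
# Ranking comments by the confidence score of a Logistic Regression classifier (C = 1).
from sklearn.linear_model import LogisticRegression

def train_ranker(X_train, y_train):
    # X_train: question-comment feature vectors; y_train: 1 for Good comments, 0 otherwise
    clf = LogisticRegression(C=1.0, max_iter=1000)
    return clf.fit(X_train, y_train)

def rank_comments(clf, X_thread):
    scores = clf.predict_proba(X_thread)[:, 1]          # probability of the "Good" class
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    return order, scores
```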
Experiments on Training Data
Results on Subtask A
For the experiments of subtask A, the hyper-parameters of the CNN model are set as follows: the number of filter windows is 2, the number of feature maps is 100, and the learning rate is 0.01. The hyper-parameters of the BLSTM model are: memory size 500 and learning rate 0.01. Table 2 shows the results of subtask A with the three different methods. Firstly, the CI, TB, TMB, and LSS features all improve significantly over the WM baseline. Since CI measures the informativeness of the comment text, this indicates that users tend to choose comments with more information. TB can learn alignments between different words. Unlike the surface word matching features, which only consider the surface words, the LSS features are obtained by integrating the context of each word; thus the LSS features show that this word embedding information complements the surface word matching information. Secondly, the combination of the five types of features achieves the best performance for the traditional method. Thirdly, the CNN- and BLSTM-based models achieve performance comparable to the traditional method. Finally, the combination of the three methods achieves the best performance, which shows that the CNN and BLSTM capture information about the Q-C pair that is complementary to the traditional method.
Results on Subtask B
Table 3 summarizes the results of subtask B with traditional NLP features and with the rank information of the Q-C pair integrated. Here Lucene represents using the Lucene Toolkit 5 with the original question as the query under BM25 (K1 = 1.2 and B = 0.75). ARC and ARR are the first and second methods presented in Section 2.3.2. According to the results of Table 3, we can make the following three observations:
(1) Traditional NLP features significantly improve the performance of question-question similarity over Lucene baseline.
(2) The Pearson, Spearman, and Kendall get similar performance and do not perform well versus traditional NLP method. The three rank correlations all take down the performance of traditional NLP method when combined with it. The possible reason is that the ranked scores of comments are obtained by pre-trained comment ranking model which has a limitation of performance.
(3) Both ARC and ARR contribute to the performance, which means that combining the information of the Q-C pair is helpful for finding related questions.
Results on Subtask C
Table 4 depicts the results on subtask C, where WMQ, TMBQ and TBQ represent extracting the word matching, topic model based, and translation based features on the original question and the related question. From Table 4, we observe results similar to those in subtask A, i.e., the traditional features contribute to comment ranking. Moreover, the performance is improved by adding features extracted from the Q-Q pair, which indicates that the information extracted from the Q-Q pair makes a significant contribution to the answer ranking subtask. However, the above results are evaluated using MAP on the top 10 comments; therefore the errors introduced in question retrieval (subtask B) would be propagated to answer ranking (subtask A) and eventually reduce the overall performance of CQA (subtask C).
To solve this problem, we investigate a two-step method that first filters unrelated questions and then filters unrelated answers. Figure 3 shows the results of the two filtering steps in terms of MAP, where N is the number of top related questions and M is the number of top ranked answers. From the left subplot of Figure 3, we see that the best performance with the filtering operation is much higher than the best score (MAP = 39.39%) without any filtering; the best performance of 44.97% is obtained with N = 5 and M = 10. The reason may be that filtering out unrelated questions removes many comments that are unrelated to the original question. The right subplot of Figure 3 shows the performance curve (N = 5) as M increases: the performance increases as M grows from 7 to 9 and reaches the best score of 46.07% with N = 5 and M = 8.
System Configuration
Based on the above experimental analysis, the three system configurations are as follows:
(1) subtask A: We used the combination of traditional method, CNN and BLSTM as the primary run in the test set. Traditional method and BLSTM serve as contrastive1 run and contrastive2 run.
(2) subtask B: Traditional method with Method 2 is used as primary run in the test set. The combination of traditional method with Method 2 and Lucene is contrastive1 run and traditional method alone is contrastive2 run.
(3) Subtask C: the two-step filtering operation with N = 5 and M = 8 serves as the primary run on the test set; the two-step filtering operation with N = 4 and M = 7 is the contrastive1 run; and the traditional features with added Q-Q pair information are used as the contrastive2 run.
Results on Test Data
Table 5 shows the results on the test sets released by the organizers. From the results, we find: (1) In subtask A, the combination of the three methods significantly improves the performance over the traditional method and the BLSTM, which is consistent with the results on the training data, as we expected. (2) In subtask B, the result using traditional features is higher than Lucene but still has a certain gap to the best result; the possible reason is that several traditional features do not work well on the test set. (3) In subtask C, contrary to our expectation, the two-step filtering operation does not bring an obvious contribution; the possible reason is that the values of M and N are not suitable for the test set.
Conclusion
In this paper, we proposed multiple strategies (i.e., a traditional feature-based method and deep learning models) to address the Community Question Answering task in SemEval 2016. For subtask A, we trained a classifier and learned question-comment representations based on CNN and BLSTM; the combination of the three models obtains the best results. For subtask B, we proposed two novel methods to improve semantic similarity estimation between Q-Q pairs by utilizing the information of the Q-C ranking. For subtask C, we employed a two-step filtering strategy to reduce the noise coming from unrelated comments. The results on the test set show the effectiveness of our methods.
Figure 1: An illustration of CNN for question-comment similarity estimation.
Figure 2: An illustration of the BLSTM model for question-comment similarity estimation.
Figure 3: The results of subtask C using the two-step filtering operation.
Table 2: Results of subtask A using different methods. '.+' means adding the current feature to the previous feature set.

Methods                  | Features | MAP(%)
Traditional NLP Features | WM       | 57.13
                         | .+TB     | 58.91
                         | .+TMB    | 61.37
                         | .+CI     | 63.03
                         | .+LSS    | 65.37
CNN                      | -        | 65.04
BLSTM                    | -        | 65.13
Tra + CNN + BLSTM        | -        | 66.84
Table 3: Results of subtask B using different methods.

Method                   | Features       | MAP(%)
Lucene                   | BM25           | 69.95
Traditional NLP Features | WM             | 69.91
                         | .+TB           | 70.72
                         | .+TBM          | 71.05
                         | .+LSS          | 72.13
                         | .+QI           | 74.03
Method 1                 | Pearson        | 61.18
                         | Spearman       | 62.86
                         | Kendall        | 62.95
                         | Pearson + NLP  | 68.15
                         | Spearman + NLP | 68.49
                         | Kendall + NLP  | 68.95
Method 2                 | NLP            | 74.03
                         | .+ARC          | 74.25
                         | .+ARR          | 75.04
Table 4: Results of subtask C with different traditional NLP features.
Table 5: Our results and the best results on the three subtask test sets.

subtask | run(rank)             | MAP(%)
A       | ECNU-primary(4)       | 77.28
        | ECNU-contrastive1     | 71.34
        | ECNU-contrastive2     | 75.71
        | Kelp-primary(1)       | 79.19
B       | ECNU-primary(7)       | 73.92
        | ECNU-contrastive1     | 73.25
        | ECNU-contrastive2     | 71.62
        | UH-PRHLT-primary(1)   | 76.70
C       | ECNU-primary(7)       | 46.47
        | ECNU-contrastive1     | 48.49
        | ECNU-contrastive2     | 47.24
        | SUper team-primary(1) | 55.41
1 http://www.statmt.org/moses/giza/GIZA++.html  2 http://alt.qcri.org/semeval2015/task3/index.php?id=dataand-tools
3 http://www.nltk.org/  4 http://scikit-learn.org/stable/
5 https://lucene.apache.org/
Acknowledgments
This research is supported by grants from the Science and Technology Commission of Shanghai Municipality (14DZ2260800 and 15ZR1410700) and the Shanghai Collaborative Innovation Center of Trustworthy Software for Internet of Things (ZF1213).
David M. Blei, Andrew Y. Ng, and Michael I. Jordan. 2003. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022.
Peter F. Brown, Vincent J. Della Pietra, Stephen A. Della Pietra, and Robert L. Mercer. 1993. The mathematics of statistical machine translation: Parameter estimation. Computational Linguistics, 19(2):263-311.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S. Corrado, and Jeff Dean. 2013. Distributed representations of words and phrases and their compositionality. In Advances in Neural Information Processing Systems 26, pages 3111-3119. Curran Associates, Inc.
Preslav Nakov, Lluís Màrquez, Walid Magdy, Alessandro Moschitti, Jim Glass, and Bilal Randeree. 2016. SemEval-2016 Task 3: Community question answering. In Proceedings of the 10th International Workshop on Semantic Evaluation (SemEval 2016), Berlin, Germany. Association for Computational Linguistics.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP 2014), pages 1532-1543.
Xuan-Hieu Phan and Cam-Tu Nguyen. 2007. GibbsLDA++: A C/C++ implementation of latent Dirichlet allocation (LDA).
Mihai Surdeanu, Massimiliano Ciaramita, and Hugo Zaragoza. 2011. Learning to rank answers to non-factoid questions from web collections. Computational Linguistics, 37(2).
Andrew Trotman. 2005. Learning to rank. Information Retrieval, 8(3):359-381.
Xiaobing Xue, Jiwoon Jeon, and W. Bruce Croft. 2008. Retrieval models for question and answer archives. In Proceedings of the 31st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR '08), pages 475-482. ACM.
Wen-tau Yih, Ming-Wei Chang, Christopher Meek, and Andrzej Pastusiak. 2013. Question answering using enhanced lexical semantic models. In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 1744-1753. Association for Computational Linguistics.
256,461,180 | Identifying Code-switching in Arabizi | We describe a corpus of social media posts that include utterances in Arabizi, a Romanscript rendering of Arabic, mixed with other languages, notably English, French, and Arabic written in the Arabic script. We manually annotated a subset of the texts with word-level language IDs; this is a non-trivial task due to the nature of mixed-language writing, especially on social media. We developed classifiers that can accurately predict the language ID tags. Then, we extended the word-level predictions to identify sentences that include Arabizi (and code-switching), and applied the classifiers to the raw corpus, thereby harvesting a large number of additional instances. The result is a large-scale dataset of Arabizi, with precise indications of code-switching between Arabizi and English, French, and Arabic. | [
32194977,
17584674,
52967399,
43963368,
12953514,
235211772,
236460241,
9911858,
227230897,
1554898,
51875128,
5188249,
227230396,
218973981,
220045406
] | Identifying Code-switching in Arabizi
Safaa Shehadi (safa.shehadi@gmail.com), Department of Computer Science, University of Haifa, Israel
Shuly Wintner, Department of Computer Science, University of Haifa, Israel
In Proceedings of the Seventh Arabic Natural Language Processing Workshop (WANLP), December 8, 2022.
We describe a corpus of social media posts that include utterances in Arabizi, a Roman-script rendering of Arabic, mixed with other languages, notably English, French, and Arabic written in the Arabic script. We manually annotated a subset of the texts with word-level language IDs; this is a non-trivial task due to the nature of mixed-language writing, especially on social media. We developed classifiers that can accurately predict the language ID tags. Then, we extended the word-level predictions to identify sentences that include Arabizi (and code-switching), and applied the classifiers to the raw corpus, thereby harvesting a large number of additional instances. The result is a large-scale dataset of Arabizi, with precise indications of code-switching between Arabizi and English, French, and Arabic.
Introduction
Arabizi is a writing system for (primarily dialectal) Arabic that uses the Roman alphabet. It is ubiquitous on social media outlets, and has many characteristics of social media writings in other languages (e.g., slang, tendency towards the spoken register, spelling errors, abbreviations, character repetition, use of emoticons, etc.) The use of the Roman alphabet facilitates (and perhaps even encourages) code-switching: moving between Arabic (represented in Arabizi) and other languages, notably English and French, sometimes even within the same sentence.
Code-switching is becoming more and more prevalent as the world's population is becoming more multilingual (Grosjean, 1998). It is a natural phenomenon that is triggered by linguistic, sociolinguistic, psycholinguistic, demographic, and contextual prompts, and has been studied mainly in the spoken language until recently. With the ubiquity of text online, however, code-switching is beginning to be investigated also in the written language (e.g., Solorio and Liu, 2008;Solorio et al., 2014;Aguilar et al., 2018;Solorio et al., 2021). Such research has various practical applications, both for understanding and for generation of codeswitched language (Sitaram et al., 2019;Dogruöz et al., 2021). Our main interest is in code-switching phenomena in Arabizi; in order to better understand them, a large dataset of Arabizi is required.
The main goal of this work is to construct a large corpus of Arabizi utterances, potentially including instances of code-switching between Arabizi and English, French, or Arabic written in the Arabic script. The dataset is based on social media posts from two outlets: Twitter and Reddit. To collect the data, we implemented a classifier that can identify sentences containing words in Arabizi, Arabic, English, and French, and used it to filter texts harvested from the two outlets.
We describe the dataset and the methods we used to curate it (Section 3). We then discuss the challenge of determining the language ID of words in multilingual texts, and describe classifiers that can accurately predict such language tags, based on a schema we developed for the task (Section 4). We extend the word-level classifiers to sentence-level ones, assigning a complex tag to each sentence that indicates the presence of words from various categories (i.e., languages) in it (Section 5). Finally, we use the classifiers to extract additional instances of sentences with Arabizi (and with code-switching) from our raw corpus (Section 6). This paper makes several contributions: 1. We release a large-scale corpus of Twitter and Reddit posts that include Arabizi; 2. We introduce a novel annotation scheme that determines the language of words in multilingual utterances; specifically, we advocate a unique tag for words that can be included in more than one mental lexicon (and hence trigger code-switching); 3. We release a portion of the dataset, manually annotated according to this annotation scheme; and 4. We provide highly accurate classifiers that can determine the language ID tags of words in this corpus; the classifiers were used to identify hundreds of thousands of additional sentences that are very likely to include Arabizi in general and code-switching with Arabizi in particular. We expect these resources, which are all publicly available, 1 to be instrumental for future research in code-switching and in Arabizi.
1 Available from https://github.com/HaifaCLG/Arabizi.
Related work
Arabizi has attracted some interest in recent years, and various works address the tasks of detecting it and converting Arabizi to the Arabic script. Darwish (2014) used word-and sequence-level features to identify Arabizi mixed with English and achieved 98.5% accuracy on the identification task. He argued that classifying a word as Arabizi or English has to be done in context, and thus employed sequence labeling using Conditional Random Fields (CRF) for classification. The data were selected from Twitter, by querying (three) commonly used Arabizi words and then extracting the user IDs of all the authors of the resulting tweets, obtaining all their tweets, under the assumption that authors who use Arabizi once may use it often. Then, tweets in which most of the words contained Arabic letters were filtered out. This resulted in 522 tweets consisting of 5207 tokens, of which 1203 were in Arabizi. Cotterell et al. (2014) compiled a corpus of more than half a million pages from an Algerian newspaper website, from which they extracted almost 7M tokens which were annotated for language, using three tags: Arabic, French, or Other. More recently, Samih and Maier (2016) compiled a corpus of Arabic mixed with Moroccan Darija, in which tokens were assigned to seven categories: three for languages, and then mixed (morphemes from more than one language in the same token), named entity, ambiguous and other. In total, 223K tokens were annotated.
The task of transliterating Arabizi to Arabic was addressed by Al-Badrashiny et al. (2014), who employed finite-state transducers, a language model and morphological processors for Arabic. They used a dataset consisting of 1500 words only. This approach was then extended to the Tunisian dialect (Masmoudi et al., 2015). The transliteration task was applied to the Tunisian dialect in a more recent work (Younes et al., 2022), using contemporary machine-learning techniques, but the datasets remained relatively small. Shazal et al. (2020) addressed the joint task of identifying Arabizi and transliterating it to the Arabic script, reporting high word accuracy on a large (1M token) dataset. Tobaili (2016) trained an SVM classifier to identify Arabizi in multilingual Twitter data. He assumed that in order to tag a tweet as Arabizi it should have more Arabizi words than English words. The best results were obtained using three features: (1) the languages as detected by Langdetect; (2) the language as detected by the Twitter API; and (3) the count of word occurrences per tweet. The dataset used in this work is small, and has merely 465 Arabizi sentences from Lebanon and 955 from Egypt. Tobaili (2016) also found that the use of Arabizi differed between Egypt and Lebanon (for example, more omission of vowels in the former, and more mixed language in the latter).
Two Arabizi datasets were recently compiled and released (Baert et al., 2020): LAD, a corpus of 7.7M tweets written in Arabizi; and SALAD, a randomly-selected subset of LAD, containing 1700 tweets, manually annotated for sentiment analysis. The tweets were harvested using Twint: Twitter Intelligence Tool, by setting 48 common words in Egyptian as seeds. This work focused mainly on the Egyptian dialect, and the manually-annotated dataset is rather small. Seddah et al. (2020) built the first North-African Arabizi treebank. It contains 1500 sentences, fully annotated with morpho-syntactic and Universal Dependency codes, with full translation at both the word and the sentence levels. It is also supplemented by 50K unlabeled sentences collected using web-crawling. The texts reflect the Algerian dialect, and contain 36% French tokens. Recently, this dataset was extended by adding transliterations of all the Arabizi tokens, as well as sentence-level annotations of sentiment and topic (Touileb and Barnes, 2021). Adouane et al. (2016) focused on the task of identifying Arabizi (and Romanized Bereber) in social media texts, reporting near-perfect accuracy using very simple character-ngram features. The data were collected from North-African sources and reflect these dialects. More recently, Younes et al. (2020) used deep learning methods to identify the language of words in Tunisian social media texts. They defined five categories for the classification (Tunisian dialect words, foreign langauge words, punctuation, symbols, and emoticons) and reported almost perfect accuracy on this task.
One of our goals in this work is to create a large dataset of sentences containing Arabizi, potentially mixed with words in other languages, focusing on the Egyptian and Lebanese dialects. Unlike much existing work, we annotate our dataset at the word level, thereby yielding a richer annotation that clearly outlines sentences with code-switching. Our language ID annotation scheme acknowledges the difficulty of assigning language ID tags to words that may be shared by more than one mental lexicon; such words, which include proper names and cognates, are assumed to trigger code-switching (Clyne, 2003; Broersma and De Bot, 2006; Broersma, 2009; Soto and Hirschberg, 2019). We then use our annotated dataset to train classifiers that we employ to extract more code-switched Arabizi instances from Reddit and Twitter, thereby extending the scope of our dataset significantly.
Data collection
We conjectured that social media outlets, particularly Reddit and Twitter, would include a sizable amount of Arabizi utterances. To identify them, we modified the method suggested by Rabinovich et al. (2018), which has subsequently been used also to harvest code-switched data from Reddit (Rabinovich et al., 2019).
First, we identified some Reddit fora ('subreddits') where we expected to find Arabizi used. These included r/arab, r/arabs, r/egypt, r/jordan, r/lebanon, and r/syria. We downloaded the entire collection of the above subreddits. The resulting (raw) Reddit dataset consisted of 3,584,915 sentences, 59,593,594 words and 72,305 authors.
For Twitter, we followed Darwish (2014) and defined a few dialectal Arabic seed words that we expected to occur with high frequency in Arabizi texts, focusing on the Egyptian dialect (where we expected to find code-switching with English) and the Lebanese dialect (where we expected mixed French). These seed terms are listed in Appendix A. We located and retained tweets that included any of the seed words in our list. We then extracted the user IDs of authors of such texts, under the assumption that authors that use Arabizi in some tweets are likely to use it elsewhere, too; and we included all tweets authored by these users in our corpus. The resulting (raw) Twitter dataset consisted of 2,466,642 sentences (22,530,044 words) authored by 1090 users: 936 Egyptians and 154 Lebanese.
We used NLTK (Bird et al., 2009) for sentence boundary detection and tokenization. As the tokenizer did not split emojis from other tokens, we added a simple post-processing step to make sure all emojis were standalone tokens. We removed extra spaces and separated Arabic letters from non-Arabic ones. We also shortened adjacent repeated letters to only two (e.g., we converted 'ahhhhh edaaa thankkk youuuu' to 'ahh edaa thankk youu').
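For illustration, the normalization steps just described could look roughly as follows in Python; the regular expressions and character ranges here are our own approximations of the described behaviour, not the exact pipeline used to build the corpus.

import re
from nltk.tokenize import sent_tokenize, word_tokenize   # requires the NLTK 'punkt' data

ARABIC = re.compile(r'[\u0600-\u06FF]+')                              # runs of Arabic-script letters
EMOJI = re.compile(r'[\U0001F300-\U0001FAFF\u2600-\u27BF]')           # rough emoji range

def preprocess(text):
    sentences = []
    for sent in sent_tokenize(text):
        sent = ARABIC.sub(lambda m: ' ' + m.group(0) + ' ', sent)     # separate Arabic letters from non-Arabic ones
        sent = EMOJI.sub(lambda m: ' ' + m.group(0) + ' ', sent)      # make every emoji a standalone token
        sent = re.sub(r'(\w)\1{2,}', r'\1\1', sent)                   # 'ahhhhh' -> 'ahh', 'youuuu' -> 'youu'
        sent = re.sub(r'\s+', ' ', sent).strip()                      # remove extra spaces
        sentences.append(word_tokenize(sent))
    return sentences

print(preprocess('ahhhhh edaaa thankkk youuuu'))   # [['ahh', 'edaa', 'thankk', 'youu']]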
Next, we aimed to identify sentences containing Arabizi in the raw dataset. We first utilized a number of language identification tools, including spaCy (Honnibal et al., 2020), Google's LangDetect (we used the Python port), langid (Lui and Baldwin, 2011, 2012), and FastText (Joulin et al., 2017). Unsurprisingly, they all failed to detect Arabizi with acceptable accuracy.
To evaluate the accuracy of existing language ID tools on Arabizi we selected 100 sentences from the annotated Arabizi dataset of Tobaili (2016): the first 50 sentences containing only Arabizi words from the Egypt dataset, and the first 50 from the Lebanon dataset. We applied the above-mentioned classifiers to these 100 sentences; since none of the tools was trained on Arabizi data, none predicted Arabizi. But they did not predict Arabic, either: instead, Langdetect defaulted to Somali 43 times, (and Indonesian 25 times); Langid detected English, Spanish-Castilian, Indonesian, and Swahili for 50 of the sentences; Fasttext preferred English and Spanish; and Spacy identified half of the sentences as Somali or Indonesian.
We therefore resorted to defining our own language ID detection model, which we specifically tuned to identifying Arabizi (in addition to English and French). We developed a dedicated scheme for tagging words in a mixed-language dataset (Section 4.1), manually tagged a sizeable number of sentences reflecting the various language combinations witnessed in the dataset (Section 4.2), and then used the manually annotated subset to train classifiers (Sections 4.3-4.4) that can assign language ID tags to words in unseen texts. Finally, we extended the annotation from words to sentences (Section 5) in order to devise an efficient extractor for more instances of code-switched Arabizi from our corpus. We now detail these stages.
Word level classification
Some existing work on Arabizi focused on identifying the language of a sentence, or a larger chunk of text. For example, Tobaili (2016) defined a tweet as Arabizi if it contained at least 50% Arabizi tokens. In contrast, we focus on identifying the language of each individual token in the corpus, as our main motivation is to prepare a dataset suitable for research on code-switching, which may of course be intra-sentential. As mentioned above, existing tools for word-level language ID fail miserably when Arabizi is concerned.
We begin by discussing the challenges involved in word-level annotation of multilingual texts (Section 4.1), detail the manual annotation (Section 4.2), and then discuss our classifiers, both statistical (Section 4.3) and neural (Section 4.4).
Annotation of language ID
Annotating multilingual data for language is challenging, especially where named entities are involved. Much work on code-switching assumes that a switch is defined when two consecutive words come from two different languages; and much cognitive linguistic work focuses on understanding what facilitates such switches. Specifically, it has been suggested that cognates (words in two languages that share a similar form and a similar meaning) facilitate code-switching (Clyne, 2003; Broersma and De Bot, 2006; Soto and Hirschberg, 2019). However, assigning a clear language tag to words in multilingual texts may not always be possible (Clyne, 2003, Chapter 3).
Consider the case of borrowing: a French word may be borrowed by Arabic, and sound like a foreign word initially, during which period its use in an otherwise Arabic sentence may be considered an insertional switch (e.g., balcon 'balcony'). With time, this word may obtain properties of the borrowing language (its phonology might be adapted to Arabic, it may obtain Arabic morphological affixes, etc.), until finally it may be considered by native Arabic speakers, including monolinguals, a common Arabic word. How should such words be tagged during various stages of their assimilation?
Similarly, culturally-specific words in one language may be borrowed into another language simply because they have no translation equivalents in the borrowing language. For example, Arabic alhamdulillah 'thank God ' can be used verbatim in an otherwise English (or French) text. This may extend also to common nouns, for example mjadara 'mujadara, a lentil-based dish '.
A particularly challenging case is named entities (which are often the extreme case of cognates). They can have identical forms in the two languages (e.g., 'Beirut' in Arabic and in English); but they may also be adapted to the phonology of each language, and thus drift apart from each other (e.g., Amreeca 'America', Surya 'Syria', Alqahirah 'Cairo'). The distance between the two forms may be significant (e.g., al-Jazair 'Algeria'). Sometimes, proper names are translated rather than adapted (e.g., al-welayat al-muttahida 'United States'), or use different words altogether (e.g., masr 'Egypt'). What language ID tag should we assign to such tokens in multilingual texts?
Several decisions must be taken in order for the annotation to be consistent, and not all decisions can always be fully justified. Our motivation in devising the annotation scheme was to facilitate consistency by providing clear and easy-to-apply guidelines. We thus defined the following categories:
0: Arabizi, including any form variant that may be considered Arabizi;
1: English, including common social media variants of words such as spelling errors, shorthand (Idk 'I don't know', plz 'please'), letter repetition (nooooo 'no', Cuuute 'cute'), etc.;
2: French, with similar social media accommodations;
3: Arabic, written in the Arabic script;
4: Shared, see below;
5: Other, tokens that are either non-linguistic or common to several languages. These include punctuation marks, numbers, emoticons and emojis, etc. As we focus only on Arabic, English and French, we also mark tokens in other languages as 'Other'. Examples include 'Bhag hindu ka baccha', 'Eww!', '12k?', and 'ahahahaha'. Notice that morphological indications of language may change a token from 'Other' to that language; e.g., '1st' or '3rd' are considered English.
In light of our focus on code-switching, we defined the category shared to include words that we have reasons to believe may belong to more than one mental lexicon (or, alternatively, to a shared mental lexicon). In the linguistic literature, trigger words are defined as words that are positively associated with code-switching, either because they are cognates or because they increase the facilitation of the other language (Clyne, 2003;Broersma and De Bot, 2006). Our annotation guidelines were the following; notice that in all these cases, the annotation is context-independent: the same token will be tagged uniformly independently of where it occurs.
• Arabizi named entities which have different (translated) counterparts in English are tagged as Arabizi, and their translation equivalents are considered English; e.g., Al-Emirat Al-Arabiya Al-mutahida 'United Arab Emirates', masr 'Egypt', al-maghrib 'Morocco'.
• Named entities in Arabizi and English that are not translated, and hence are written in a similar way in both languages, are considered as shared words; e.g., al-ordon 'Jordan', alqahirah 'Cairo', Lubnan 'Lebanon'.
• Culturally-dependent terms that have no translation equivalent in the other language are tagged as shared; e.g., mjadara 'mujadara', alhamdulillah 'thank God', ramadan 'ramadan', muezzin 'muezzin'.
• This also extends to loan words that do not have translation equivalents in the borrowing languages, e.g., video 'video', or where the loan word is commonly used even if a translation exists; e.g., taxi 'taxi', mobile 'cellphone'.

To demonstrate the word-level annotation, consider the following examples:
• Ask for Mjadara Hamra
Here, the first two tokens are obviously English ('1'), while the third token is tagged '4' for shared. The fourth token, Hamra 'red', raises a question: is it the adjective 'red', in which case it should be tagged '0' for Arabizi, or is it part of a named entity that includes Mjadara 'mujadara', in which case it should be '4' for shared? We opted for the former. In contrast, in
• even the humble kibbe nayeh
We tagged the first 3 tokens as English ('1'), and kibbe nayeh 'raw kibbe', where kibbe is a popular dish consisting of meat and bulgur, but nayeh 'raw' changes its meaning to a different dish made from raw meat, were both tagged '4' for shared as we considered them part of a single named entity.
A particularly interesting example is
• Nis-har youm el sabt 3al Balcon
which means 'We stay up Saturday night on the balcony'. The verb nishar 'we spend the evening' was probably spelled with a dash in order to prevent the 'sh' from being pronounced as English [sh]. We tagged all tokens '0' for Arabizi, except the last one which was tagged '4' for shared. Finally, some cases involved intra-word code-switching. In
• ma2darsh a subtweet u da mabda2yan 'I can't subtweet you, this is tentative'
the English 'subtweet' is used as a verb, with the Arabic prefix 'a', which is a derivational morpheme that converts nouns to verbs; the result is asubtweet 'to subtweet'. In this case, the author introduced a space between the two morphemes so we could tag 'a' as Arabizi and 'subtweet' as English. In another example, ana ba-act 'I act', the author used a dash between the Arabizi prefix 'ba' and the English verb 'act', so again we could tag both morphemes separately. We do not have a special tag for tokens that involve morphemes in more than one language because no such case was witnessed in our dataset.
Manual annotation
From the raw datasets we described in Section 3, we initially manually annotated 1050 sentences (roughly 500 each from Reddit and Twitter) at the word level, assigning a tag of '0' to '5' to each token. 2 We then used the classifier described below (Section 4.3) to identify more "interesting" samples in the entire dataset (the vast majority of the sentences in the dataset are naturally plain English sentences). Of those, we manually selected more sentences that reflected as best as possible the diversity of sentence types in the dataset, and manually corrected the predictions of the classifier. This process resulted in 2643 manually annotated sentences, over 1000 of which include Arabizi words, which constitute the final word-level annotated dataset on which we train and evaluate our classifiers. The details are summarized in Table 1 (note that not all sentences in a given post were annotated).
Statistic classification
Table 1: Word-level annotated dataset.
Dataset   Posts   Sents.   Tokens
Reddit      922     980    13752
Twitter    1653    1663    16061
Total      2575    2643    29813

We begin with the more conservative statistic classification. Since the tag of a given token is highly dependent on the tags of its predecessors, we used CRF (Lafferty et al., 2001) to train a sequence-to-sequence classifier. We used the following features to represent each instance (token):
• The word itself in lowercase;
• Are all the word's letters uppercase?;
• Is only the first letter uppercase?;
• Is the word in the (freely-available list of) 5050 most frequent English words, taken from the one billion word Corpus of Contemporary American English?;
• Is the word in the 930 most frequent French words?;
• Is it an Arabic word? We used CAMeL Tools (Obeid et al., 2020) in order to detect Arabic words;
• Does the word contain numerals? This is useful because digits are used to represent Arabic letters in Arabizi;
• All the features above, with respect to the previous word;
• Is it the first word in the sentence?;
• Is it the last word in the sentence?

Here and elsewhere, we used ten-fold cross-validation for evaluation. Table 2 lists the evaluation results (precision, recall and F1) for each category separately, as well as the number of words of each category in the test set ("support"). It also shows the total evaluation metrics, averaged over all categories (we report micro-, macro- and weighted averages). The total accuracy, over the entire test set, is 0.949.
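To make the feature set concrete, the sketch below shows how such token representations could be assembled for a CRF toolkit such as sklearn-crfsuite; the feature names, the word-list arguments and the Arabic check are placeholders for the resources described above, not the authors' released code.

import sklearn_crfsuite

def token_features(sent, i, en_5050, fr_930, is_arabic_word):
    w = sent[i]
    feats = {
        'lower': w.lower(),
        'all_upper': w.isupper(),
        'init_upper': w.istitle(),
        'in_en_5050': w.lower() in en_5050,          # 5050 most frequent English words
        'in_fr_930': w.lower() in fr_930,            # 930 most frequent French words
        'is_arabic': is_arabic_word(w),              # e.g., a check based on CAMeL Tools
        'has_digit': any(c.isdigit() for c in w),
        'BOS': i == 0,
        'EOS': i == len(sent) - 1,
    }
    if i > 0:
        p = sent[i - 1]
        feats.update({'prev_lower': p.lower(),
                      'prev_all_upper': p.isupper(),
                      'prev_in_en_5050': p.lower() in en_5050,
                      'prev_has_digit': any(c.isdigit() for c in p)})
    return feats

# X = [[token_features(s, i, EN, FR, is_ar) for i in range(len(s))] for s in train_sents]
# crf = sklearn_crfsuite.CRF(algorithm='lbfgs', max_iterations=100)
# crf.fit(X, train_tags)   # train_tags: one list of tags ('0'..'5') per sentence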
Neural classification
We also experimented with more contemporary neural classification. We defined a deep neural network consisting of three layers: (1) An embedding layer which is the concatenation of the last 4 layers of a BERT (Devlin et al., 2019) model (we used the multilingual uncased version); (2) A bidirectional LSTM (Hochreiter and Schmidhuber, 1997) layer: 2 hidden layers of size 400, and dropout of 0.5;
(3) A CRF layer (Huang et al., 2015).
We used the BERT tokenizer, in the multilingual uncased version, to tokenize the text. As the tokenizer is different, the number of tokens differs slightly from the case of statistical classification (this explains the differences in the support size between Tables 2 and 3). More importantly, BERT's predictions are provided for units (sub-tokens) that we did not manually annotate. As is common in such cases, for each original token that was split by BERT we selected the tag of the first sub-token and induced it over the other sub-tokens to which the original token was split. Of course, this may harm the accuracy of the neural classifier. We used the Adam optimizer with a learning rate of 0.001 and cross-entropy loss. We trained the model for four epochs and chose a batch size of 32. The results are listed in Table 3. The total accuracy, over the entire test set, is 0.952, almost identical to the accuracy of the statistic classifier.
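A schematic rendering of this architecture in PyTorch is given below; the pytorch-crf package stands in for the CRF layer of Huang et al. (2015), and the training loop is omitted, so this is a sketch rather than the exact implementation.

import torch
import torch.nn as nn
from transformers import BertModel
from torchcrf import CRF   # pytorch-crf, used here as a stand-in CRF layer

class BertBiLstmCrf(nn.Module):
    def __init__(self, num_tags=6, hidden=400):
        super().__init__()
        self.bert = BertModel.from_pretrained('bert-base-multilingual-uncased',
                                              output_hidden_states=True)
        # embedding = concatenation of the last 4 BERT layers (4 * 768 dimensions)
        self.lstm = nn.LSTM(4 * 768, hidden, num_layers=2, dropout=0.5,
                            bidirectional=True, batch_first=True)
        self.proj = nn.Linear(2 * hidden, num_tags)
        self.crf = CRF(num_tags, batch_first=True)

    def forward(self, input_ids, attention_mask, tags=None):
        out = self.bert(input_ids, attention_mask=attention_mask)
        emb = torch.cat(out.hidden_states[-4:], dim=-1)    # last 4 hidden layers
        feats, _ = self.lstm(emb)
        emissions = self.proj(feats)
        mask = attention_mask.bool()
        if tags is not None:
            return -self.crf(emissions, tags, mask=mask)   # training: negative log-likelihood
        return self.crf.decode(emissions, mask=mask)       # inference: best tag sequence per sentence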
Identifying code-switching
The word-level annotation immediately facilitates the identification of code-switching: a sentence with at least one word in Arabizi and one in either English or French necessarily includes a switch. To simplify this task, we now annotate full sentences: we assign complex tags to sentences that reflect the existence of each of our six word categories in a given sentence. The tags consist of six bits, each referring to the presence in the sentence of words categorized as Arabizi, English, French, Arabic, shared, and Other. Table 4 lists the number of samples associated with each 6-bit tag in the annotated dataset. For example, the sentence
• good luck albi, have a nice dayy <3 'good luck my love, have a nice day ♡'
is associated with the tag 110001, reflecting the presence of English, Arabizi and an emoticon (note that we treat the misspelled 'dayy' as a valid English word). More example sentences include:
• "Khalas tamam , you know best" 'Okay, you know best' . (110000) • happiest birthday ya hussein :)
'happiest birthday oh hussein :) ' (110011) • Take a flight to Jeddah w ishtiri al baik 'Take a flight to Jeddah and buy the bike' (110010, as 'Jeddah ' is shared) Note that we do not commit on the precise location of the switch; when a sentence contains shared words, they may serve as wildcards for determining this location. For example, in the last sentence above, the switch may occur before or after the shared word 'Jeddah '.
Direct classification
First, we trained a statistic classifier to directly predict the 6-bit tags. We experimented with various statistic classification models, including SVM, logistic regression, KNN, and random forest. The latter yielded the best accuracy, so the results we report below were obtained with random forest. We used the following features:
• Character uni-gram, bi-gram and tri-gram counts, normalized by the number of characters in the sentence. We only used the most frequent 250 n-grams;
• Number of English, Arabic and French words, all normalized by the number of tokens in the sentence (excluding emojis);
• The number of tokens that contain numeric digits, normalized by the number of tokens in the sentence;
• The normalized number of emojis, punctuation and numbers in the sentence, to help identify the category Other;
• The number of English words detected by fastText with confidence score greater than 0.95;
• The number of French words detected by fastText with confidence score greater than 0.5;
• The number of words that do not belong to any of the previous categories, which helps detect Arabizi and Other;
• A binary flag which checks whether the whole sentence was detected by fastText as English with confidence score greater than 0.8. We observed that sentences with score greater than 0.8 tend to actually include English words, but pure Arabizi sentences are sometimes erroneously classified as English with lower confidence;
• A binary flag which checks whether the whole sentence was detected as French with confidence score greater than 0.3;
• A binary flag which checks whether the whole sentence was detected as some language other than French, English, or Arabic. This helps detect Arabizi and other languages.

We used ten-fold cross-validation and evaluated the accuracy of the model in predicting each of the bits in the tag vector independently (i.e., predicting whether a given sentence includes words in English, Arabizi, French, etc.). The accuracy results on each category are listed in Table 5. The total accuracy of assigning the exact 6-bit tag to each sentence is 0.62.
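A few of these features are illustrated below with the off-the-shelf fastText language-ID model (lid.176.bin); the character n-gram and word-list counts are omitted, and the feature names are our own, so this is only a partial sketch of the feature extractor.

import fasttext

lid = fasttext.load_model('lid.176.bin')   # off-the-shelf fastText language-ID model

def detect(text):
    labels, probs = lid.predict(text.replace('\n', ' '))
    return labels[0].replace('__label__', ''), float(probs[0])

def fasttext_features(tokens):
    if not tokens:
        return {}
    n = len(tokens)
    per_word = [detect(t) for t in tokens]
    sent_lang, sent_conf = detect(' '.join(tokens))
    return {
        'digit_tokens': sum(any(c.isdigit() for c in t) for t in tokens) / n,
        'en_words_p95': sum(l == 'en' and p > 0.95 for l, p in per_word),
        'fr_words_p50': sum(l == 'fr' and p > 0.5 for l, p in per_word),
        'sent_en_p80': int(sent_lang == 'en' and sent_conf > 0.8),
        'sent_fr_p30': int(sent_lang == 'fr' and sent_conf > 0.3),
        'sent_other': int(sent_lang not in ('en', 'fr', 'ar')),
    }

# The resulting feature vectors can then be fed to a random-forest classifier,
# e.g., sklearn.ensemble.RandomForestClassifier, as described above.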
Indirect classification
As an alternative to direct classification, it is possible to combine the predictions of the word-level classifiers (Section 4) and create 6-bit tags for each sentence. Recall that tags at the sentence level only indicate the existence of words from a given category in the sentence (rather than whether all words in the sentence are annotated correctly). The results of inducing sentence-level tags from the word-level ones (as obtained by the statistic classifier, Section 4.3) are listed in Table 6. The total accuracy of correctly identifying the complex, 6-bit tag is 0.78, much better than with the direct classifier. Note that in both approaches, the identification of Arabic is perfect, most likely owing to the different character set of Arabic; and in both cases, shared words are the most challenging to identify (recall that they were also hard to annotate manually). The accuracy on French is low, probably because of the small number of sentences with French words in the training data.
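The induction step itself is simple once word-level tags are available; a minimal sketch is given below (tag indices follow the order Arabizi, English, French, Arabic, shared, Other).

def sentence_tag(word_tags):
    # collapse word-level tags (0-5) into the 6-bit presence tag of the sentence
    bits = ['0'] * 6
    for t in word_tags:
        bits[t] = '1'
    return ''.join(bits)

def includes_code_switching(word_tags):
    # at least one Arabizi word plus at least one English or French word
    return 0 in word_tags and (1 in word_tags or 2 in word_tags)

sentence_tag([1, 1, 0, 1, 1, 1, 1, 5])   # 'good luck albi have a nice dayy <3' -> '110001'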
Harvesting more data
With the highly accurate classifiers described above, we set out to extend our corpus of Arabizi in general and Arabizi code-switching in particular. We applied the statistic word-level classifier (Section 4.3) to the entire dataset we collected from Reddit and Twitter (Section 3). We extracted all the sentences that included at least one Arabizi word, and associated each token in these sentences with its language ID tag; we also decorated the entire sentence with the complex 6-bit tag that indicates which languages are included in it. This resulted in a set of over 880K sentences, which constitutes our automatically-obtained dataset of Arabizi (see Table 7). This dataset, we trust, will be an invaluable resource for research in Arabizi and in code-switching. As an additional verification of the dataset, we randomly chose 100 sentences (50 each from Reddit and Twitter) that were annotated as including at least two tokens each in both Arabizi and English (hence, that included code-switching) and manually inspected them. Of the 100, 77 (42 from Twitter, 35 from Reddit) indeed included code-switching between English and Arabizi.
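The extraction amounts to a simple filter over the automatically tagged corpus, sketched below; it reuses the sentence_tag helper from the sketch above and assumes each sentence comes with its predicted word-level tags.

def harvest(tagged_sentences):
    # tagged_sentences: (tokens, word_tags) pairs produced by the word-level classifier
    with_arabizi, code_switched = [], []
    for tokens, tags in tagged_sentences:
        if 0 in tags:                              # at least one Arabizi token
            with_arabizi.append((tokens, tags, sentence_tag(tags)))
            if 1 in tags or 2 in tags:             # plus English or French: code-switching
                code_switched.append((tokens, tags))
    return with_arabizi, code_switched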
A qualitative analysis of the errors revealed several cases in which a nonstandard spelling of English was erroneously considered Arabizi. For example, in the fully English wtf yo where da love go, our classifier identified 'da' as Arabizi, probably because it is a common Egyptian word meaning 'this'. Similarly, in I ' m sorry 4 ya loss the classifier unsurprisingly identified 'ya' as Arabizi.
Some proper nouns that we tagged as shared, especially those whose origin is Arabic, were predicted as Arabizi. E.g., in They also mentioned a new location ; somewhere in sin el fil, the last three tokens were predicted Arabizi, but we tagged them as shared (the name of a suburb of Beirut). Finally, tokens that involve both letters and digits were sometimes erroneously tagged as Arabizi (e.g., I have the 20GB 2Mbps plan).
Conclusion
We described a classifier that identifies words in Arabizi, English, Arabic, and French in multilingual sentences from social media. We applied the classifier to a large set of sentences collected from Twitter and Reddit, and produced a huge dataset of more than 880K automatically-annotated Arabizi sentences, of which over 446K include code-switching with either English or French.
We are now ready to use this dataset for a large-scale corpus-based investigation of theoretical research questions in cognitive linguistics. Specifically, we are interested in the correlation between shared words, as defined in our annotation scheme, and code-switching. We leave such investigations for future work.
Ethical considerations and limitations
This research was approved by the University of Haifa IRB. We collected data from two social media outlets, Reddit and Twitter, in compliance with their terms of service (Reddit, Twitter). For the latter, we distribute tweet IDs and sentence IDs instead of the actual sentences, in line with Twitter's terms of use. For anonymity, we systematically replaced all user IDs (in both datasets) by unique IDs; we do not have, and therefore do not distribute, any personal information of the authors. With this additional level of anonymization, we anticipate very minimal risk of abuse or dual use of the data.
Like any other dataset, the corpus we report on here is not representative. In particular, it probably includes Arabizi as used mainly in Egypt and in Lebanon but not elsewhere in the Arab-speaking world. It is very likely unbalanced in terms of any demographic aspect of its authors. Clearly, the automatic annotation of language IDs is not perfect, and may introduce noise. Use of this corpus for linguistic research must therefore be done with caution. Nevertheless, we trust that the sheer size of the dataset would make it instrumental for research on code-switching in general and in Arabizi in particular.
A Lists of seed words
We collected data from Reddit and Twitter based on texts that included the following words.
Lebanese bya3ref 'he knows', ma3leh 'never mind ', be7ke 'to say', halla2 'now', ma32ool 'reasonable', 3shen 'in order to', 3am (present tense particle) mazboot 'alright' kteer 'many/much ' 3lay/3layki 'on me/on you f em '.
Egyptian awy 'very/very much ', kwayes 'OK ', ezai 'how', 5ales 'never', 7a2ee2y 'really', m3lesh 'never mind ', howa-eh 'what'.
Interestingly, the word mazboot 'alright' means 'strong' in Hindi, so it yielded many false positives. However, since it also resulted in having many relevant Lebanese tweets, we manually scanned them and removed irrelevant users. Similarly, the word awy 'very' is highly indicative of the Egyptian dialect, but it is also used as an abbreviation of the English word 'away'. Attempting to use the seed words baddi 'I want' and balki 'maybe', both highly widespread in Lebanon, resulted in harvesting many irrelevant texts; upon inspection we found that these words are frequent proper names in India. They were therefore removed from the seed word list.
Table 2: Results: word-level statistic classification.
Table 3: Results: word-level neural classification.
Table 4: Distribution of sentence-level tags in the annotated dataset (e.g., tag 010001: 604 occurrences; tag 010011: 297 occurrences).
Table 5: Results: sentence-level direct classification.
Table 6: Results: indirect sentence-level classification.
Table 7: The automatically-annotated dataset. Number of sentences with at least one Arabizi token (With Arabizi); with a majority of Arabizi tokens (Arabizi); and with code-switching between Arabizi and English (Ar-En CS) and between Arabizi and French (Ar-Fr CS).
Manual annotation was performed by the first author, who is a native speaker of Palestinian Arabic and fluent in English. The main challenge was the identification of shared words, which required discussion between the two authors, as well as with colleagues.
Acknowledgements
We thank Melinda Fricke, Yulia Tsvetkov, Yuli Zeira, and the anonymous reviewers for their valuable feedback and suggestions. This work was supported in part by grant No. 2019785 from the United States-Israel Binational Science Foundation (BSF), and by grants No. 2007960, 2007656, 2125201 and 2040926 from the United States National Science Foundation (NSF).
Romanized Berber and Romanized Arabic automatic language identification using machine learning. Wafia Adouane, Nasredine Semmar, Richard Johansson, Proceedings of the Third Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial3). the Third Workshop on NLP for Similar Languages, Varieties and Dialects (VarDial3)Osaka, JapanThe COLING 2016 Organizing CommitteeWafia Adouane, Nasredine Semmar, and Richard Jo- hansson. 2016. Romanized Berber and Romanized Arabic automatic language identification using ma- chine learning. In Proceedings of the Third Work- shop on NLP for Similar Languages, Varieties and Dialects (VarDial3), pages 53-61, Osaka, Japan. The COLING 2016 Organizing Committee.
Named entity recognition on code-switched data: Overview of the CALCS 2018 shared task. Gustavo Aguilar, Fahad Alghamdi, Victor Soto, Mona Diab, Julia Hirschberg, Thamar Solorio, Proceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching. the Third Workshop on Computational Approaches to Linguistic Code-SwitchingGustavo Aguilar, Fahad AlGhamdi, Victor Soto, Mona Diab, Julia Hirschberg, and Thamar Solorio. 2018. Named entity recognition on code-switched data: Overview of the CALCS 2018 shared task. In Pro- ceedings of the Third Workshop on Computational Approaches to Linguistic Code-Switching, pages 138- 147.
Automatic transliteration of Romanized dialectal Arabic. Mohamed Al-Badrashiny, Ramy Eskander, Nizar Habash, Owen Rambow, 10.3115/v1/W14-1604Proceedings of the Eighteenth Conference on Computational Natural Language Learning. the Eighteenth Conference on Computational Natural Language LearningAnn Arbor, MichiganAssociation for Computational LinguisticsMohamed Al-Badrashiny, Ramy Eskander, Nizar Habash, and Owen Rambow. 2014. Automatic transliteration of Romanized dialectal Arabic. In Proceedings of the Eighteenth Conference on Com- putational Natural Language Learning, pages 30-38, Ann Arbor, Michigan. Association for Computational Linguistics.
Arabizi language models for sentiment analysis. Gaétan Baert, Souhir Gahbiche, Guillaume Gadek, Alexandre Pauchet, 10.18653/v1/2020.coling-main.51Proceedings of the 28th International Conference on Computational Linguistics. the 28th International Conference on Computational LinguisticsBarcelona, SpainOnline). International Committee on Computational LinguisticsGaétan Baert, Souhir Gahbiche, Guillaume Gadek, and Alexandre Pauchet. 2020. Arabizi language models for sentiment analysis. In Proceedings of the 28th International Conference on Computational Linguis- tics, pages 592-603, Barcelona, Spain (Online). In- ternational Committee on Computational Linguistics.
Natural Language Processing with Python. Steven Bird, Ewan Klein, Edward Loper, O'Reilly Media, Sebastopol, CASteven Bird, Ewan Klein, and Edward Loper. 2009. Nat- ural Language Processing with Python. O'Reilly Media, Sebastopol, CA.
Triggered codeswitching between cognate languages. Mirjam Broersma, Bilingualism: Language and Cognition. 124Mirjam Broersma. 2009. Triggered codeswitching be- tween cognate languages. Bilingualism: Language and Cognition, 12(4):447-462.
Triggered codeswitching: A corpus-based evaluation of the original triggering hypothesis and a new alternative. Mirjam Broersma, Kees De Bot, Bilingualism: Language and cognition. 91Mirjam Broersma and Kees De Bot. 2006. Triggered codeswitching: A corpus-based evaluation of the original triggering hypothesis and a new alternative. Bilingualism: Language and cognition, 9(1):1-13.
Dynamics of language contact: English and immigrant languages. Cambridge approaches to language contact. G Michael, Clyne, Cambridge University PressCambridgeMichael G. Clyne. 2003. Dynamics of language contact: English and immigrant languages. Cambridge ap- proaches to language contact. Cambridge University Press, Cambridge.
An Algerian Arabic-French code-switched corpus. Ryan Cotterell, Adithya Renduchintala, Naomi Saphra, Chris Callison-Burch, Proceedings of the First Workshop on Free/Open-Source Arabic Corpora and Corpora Processing Tools. the First Workshop on Free/Open-Source Arabic Corpora and Corpora Processing ToolsAssociation for Computational LinguisticsRyan Cotterell, Adithya Renduchintala, Naomi Saphra, and Chris Callison-Burch. 2014. An Algerian Arabic- French code-switched corpus. In Proceedings of the First Workshop on Free/Open-Source Arabic Cor- pora and Corpora Processing Tools. Association for Computational Linguistics.
Arabizi detection and conversion to Arabic. Kareem Darwish, 10.3115/v1/W14-3629Proceedings of the EMNLP 2014 Workshop on Arabic Natural Language Processing (ANLP). the EMNLP 2014 Workshop on Arabic Natural Language Processing (ANLP)Doha, QatarAssociation for Computational LinguisticsKareem Darwish. 2014. Arabizi detection and conver- sion to Arabic. In Proceedings of the EMNLP 2014 Workshop on Arabic Natural Language Processing (ANLP), pages 217-224, Doha, Qatar. Association for Computational Linguistics.
BERT: pre-training of deep bidirectional transformers for language understanding. Jacob Devlin, Ming-Wei Chang, Kenton Lee, Kristina Toutanova, 10.18653/v1/n19-1423Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019. the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019Association for Computational LinguisticsJacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2019. BERT: pre-training of deep bidirectional transformers for language under- standing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, NAACL-HLT 2019, pages 4171-4186. Association for Computational Linguistics.
A survey of code-switching: Linguistic and social perspectives for language technologies. A Seza Dogruöz, Sunayana Sitaram, Barbara E Bullock, Almeida Jacqueline Toribio, 10.18653/v1/2021.acl-long.131Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing. the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language ProcessingAssociation for Computational LinguisticsA. Seza Dogruöz, Sunayana Sitaram, Barbara E. Bul- lock, and Almeida Jacqueline Toribio. 2021. A sur- vey of code-switching: Linguistic and social per- spectives for language technologies. In Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, pages 1654-1666. Association for Computational Linguistics.
Studying bilinguals: Methodological and conceptual issues. François Grosjean, 10.1017/S136672899800025XBilingualism: Language and Cognition. 12François Grosjean. 1998. Studying bilinguals: Method- ological and conceptual issues. Bilingualism: Lan- guage and Cognition, 1(2):131 -149.
Long short-term memory. Sepp Hochreiter, Jürgen Schmidhuber, 10.1162/neco.1997.9.8.1735Neural computation. 98Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural computation, 9(8):1735- 1780.
Sofie Van Landeghem, and Adriane Boyd. 2020. spaCy: Industrialstrength Natural Language Processing in Python. Matthew Honnibal, Ines Montani, 10.5281/zenodo.1212303Matthew Honnibal, Ines Montani, Sofie Van Lan- deghem, and Adriane Boyd. 2020. spaCy: Industrial- strength Natural Language Processing in Python.
Bidirectional LSTM-CRF models for sequence tagging. Zhiheng Huang, Wei Xu, Kai Yu, abs/1508.01991CoRRZhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidi- rectional LSTM-CRF models for sequence tagging. CoRR, abs/1508.01991.
Bag of tricks for efficient text classification. Armand Joulin, Edouard Grave, Piotr Bojanowski, Tomas Mikolov, Proceedings of the 15th Conference of the European Chapter. the 15th Conference of the European ChapterShort Papers; Valencia, SpainAssociation for Computational Linguistics2Armand Joulin, Edouard Grave, Piotr Bojanowski, and Tomas Mikolov. 2017. Bag of tricks for efficient text classification. In Proceedings of the 15th Con- ference of the European Chapter of the Association for Computational Linguistics: Volume 2, Short Pa- pers, pages 427-431, Valencia, Spain. Association for Computational Linguistics.
Conditional random fields: Probabilistic models for segmenting and labeling sequence data. John Lafferty, Andrew Mccallum, Fernando Pereira, Proceedings of the 18th International Conference on Machine Learning (ICML-01). the 18th International Conference on Machine Learning (ICML-01)San FranciscoMorgan KaufmannJohn Lafferty, Andrew McCallum, and Fernando Pereira. 2001. Conditional random fields: Probabilis- tic models for segmenting and labeling sequence data. In Proceedings of the 18th International Conference on Machine Learning (ICML-01), pages 282-289, San Francisco. Morgan Kaufmann.
Cross-domain feature selection for language identification. Marco Lui, Timothy Baldwin, Proceedings of 5th International Joint Conference on Natural Language Processing. 5th International Joint Conference on Natural Language ProcessingChiang Mai, Thailand. Asian Federation of Natural Language ProcessingMarco Lui and Timothy Baldwin. 2011. Cross-domain feature selection for language identification. In Pro- ceedings of 5th International Joint Conference on Natural Language Processing, pages 553-561, Chi- ang Mai, Thailand. Asian Federation of Natural Lan- guage Processing.
Langid.Py: An off-the-shelf language identification tool. Marco Lui, Timothy Baldwin, Proceedings of the ACL 2012 System Demonstrations. the ACL 2012 System DemonstrationsUSAAssociation for Computational LinguisticsMarco Lui and Timothy Baldwin. 2012. Langid.Py: An off-the-shelf language identification tool. In Pro- ceedings of the ACL 2012 System Demonstrations, page 25-30, USA. Association for Computational Linguistics.
Arabic transliteration of Romanized Tunisian dialect text: A preliminary investigation. Abir Masmoudi, Nizar Habash, Mariem Ellouze, Yannick Estève, Lamia Hadrich Belguith, Computational Linguistics and Intelligent Text Processing. ChamSpringer International PublishingAbir Masmoudi, Nizar Habash, Mariem Ellouze, Yan- nick Estève, and Lamia Hadrich Belguith. 2015. Ara- bic transliteration of Romanized Tunisian dialect text: A preliminary investigation. In Computational Lin- guistics and Intelligent Text Processing, pages 608- 619, Cham. Springer International Publishing.
CAMeL tools: An open source python toolkit for Arabic natural language processing. Ossama Obeid, Nasser Zalmout, Salam Khalifa, Dima Taji, Mai Oudah, Bashar Alhafni, Go Inoue, Fadhl Eryani, Alexander Erdmann, Nizar Habash, Proceedings of the 12th Language Resources and Evaluation Conference. the 12th Language Resources and Evaluation ConferenceMarseille, FranceEuropean Language Resources AssociationOssama Obeid, Nasser Zalmout, Salam Khalifa, Dima Taji, Mai Oudah, Bashar Alhafni, Go Inoue, Fadhl Eryani, Alexander Erdmann, and Nizar Habash. 2020. CAMeL tools: An open source python toolkit for Ara- bic natural language processing. In Proceedings of the 12th Language Resources and Evaluation Confer- ence, pages 7022-7032, Marseille, France. European Language Resources Association.
CodeSwitch-Reddit: Exploration of written multilingual discourse in online discussion forums. Ella Rabinovich, Masih Sultani, Suzanne Stevenson, 10.18653/v1/D19-1484Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)Hong Kong, ChinaAssociation for Computational LinguisticsElla Rabinovich, Masih Sultani, and Suzanne Stevenson. 2019. CodeSwitch-Reddit: Exploration of written multilingual discourse in online discussion forums. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Lan- guage Processing (EMNLP-IJCNLP), pages 4776- 4786, Hong Kong, China. Association for Computa- tional Linguistics.
Native language cognate effects on second language lexical choice. Ella Rabinovich, Yulia Tsvetkov, Shuly Wintner, Transactions of the Association for Computational Linguistics. 6Ella Rabinovich, Yulia Tsvetkov, and Shuly Wintner. 2018. Native language cognate effects on second lan- guage lexical choice. Transactions of the Association for Computational Linguistics, 6:329-342.
An Arabic-Moroccan Darija code-switched corpus. Younes Samih, Wolfgang Maier, Proceedings of the Tenth International Conference on Language Resources and Evaluation (LREC'16). the Tenth International Conference on Language Resources and Evaluation (LREC'16)Portorož, SloveniaEuropean Language Resources Association (ELRAYounes Samih and Wolfgang Maier. 2016. An Arabic- Moroccan Darija code-switched corpus. In Proceed- ings of the Tenth International Conference on Lan- guage Resources and Evaluation (LREC'16), pages 4170-4175, Portorož, Slovenia. European Language Resources Association (ELRA).
Building a user-generated content North-African Arabizi treebank: Tackling hell. Djamé Seddah, Farah Essaidi, Amal Fethi, Matthieu Futeral, Benjamin Muller, Pedro Javier Ortiz Suárez, Benoît Sagot, and Abhishek Srivastava. 2020. In Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics, pages 1139-1150, Online. Association for Computational Linguistics. 10.18653/v1/2020.acl-main.107
A unified model for Arabizi detection and transliteration using sequence-to-sequence models. Ali Shazal, Aiza Usman, Nizar Habash, Proceedings of the Fifth Arabic Natural Language Processing Workshop. the Fifth Arabic Natural Language Processing WorkshopBarcelona, SpainAssociation for Computational LinguisticsAli Shazal, Aiza Usman, and Nizar Habash. 2020. A unified model for Arabizi detection and translitera- tion using sequence-to-sequence models. In Proceed- ings of the Fifth Arabic Natural Language Processing Workshop, pages 167-177, Barcelona, Spain (Online). Association for Computational Linguistics.
A survey of code-switched speech and language processing. Sunayana Sitaram, Khyathi Raghavi Chandu, Krishna Sai, Alan W Rallabandi, Black, Sunayana Sitaram, Khyathi Raghavi Chandu, Sai Kr- ishna Rallabandi, and Alan W. Black. 2019. A survey of code-switched speech and language processing.
Overview for the first shared task on language identification in code-switched data. Thamar Solorio, Elizabeth Blair, Suraj Maharjan, Steven Bethard, Mona Diab, Mahmoud Ghoneim, Abdelati Hawwari, Julia Alghamdi, Fahadand Hirschberg, Alison Chang, Proceedings of the First Workshop on Computational Approaches to Code Switching. the First Workshop on Computational Approaches to Code SwitchingThamar Solorio, Elizabeth Blair, Suraj Mahar- jan, Steven Bethard, Mona Diab, Mahmoud Ghoneim, Abdelati Hawwari, Julia AlGhamdi, Fa- hadand Hirschberg, Alison Chang, et al. 2014. Overview for the first shared task on language iden- tification in code-switched data. In Proceedings of the First Workshop on Computational Approaches to Code Switching, pages 62-72.
Thamar Solorio, Shuguang Chen, Alan W Black, Mona Diab, Sunayana Sitaram, Victor Soto, 2021. Proceedings of the Fifth Workshop on Computational Approaches to Linguistic Code-Switching. Association for Computational Linguistics. Emre Yilmaz, and Anirudh SrinivasanOnlineThamar Solorio, Shuguang Chen, Alan W. Black, Mona Diab, Sunayana Sitaram, Victor Soto, Emre Yilmaz, and Anirudh Srinivasan, editors. 2021. Proceedings of the Fifth Workshop on Computational Approaches to Linguistic Code-Switching. Association for Com- putational Linguistics, Online.
Learning to predict code-switching points. Thamar Solorio, Yang Liu, Proceedings of the 2008 Conference on Empirical Methods in Natural Language Processing. the 2008 Conference on Empirical Methods in Natural Language ProcessingHonolulu, HawaiiAssociation for Computational LinguisticsThamar Solorio and Yang Liu. 2008. Learning to predict code-switching points. In Proceedings of the 2008 Conference on Empirical Methods in Natural Lan- guage Processing, pages 973-981, Honolulu, Hawaii. Association for Computational Linguistics.
The role of cognate words, POS tags and entrainment in code-switching. Victor Soto, Nishmar Cestero, Julia Hirschberg, 10.21437/Interspeech.2018-1099Proceedings of Interspeech 2018, the 19th Annual Conference of the International Speech Communication Association. Interspeech 2018, the 19th Annual Conference of the International Speech Communication AssociationISCAVictor Soto, Nishmar Cestero, and Julia Hirschberg. 2018. The role of cognate words, POS tags and entrainment in code-switching. In Proceedings of Interspeech 2018, the 19th Annual Conference of the International Speech Communication Association, pages 1938-1942. ISCA.
Improving code-switched language modeling performance using cognate features. Victor Soto, Julia Hirschberg, 10.21437/Interspeech.2019-2681Proceedings of Interspeech 2019, the 20th Annual Conference of the International Speech Communication Association. Interspeech 2019, the 20th Annual Conference of the International Speech Communication AssociationISCAVictor Soto and Julia Hirschberg. 2019. Improving code-switched language modeling performance us- ing cognate features. In Proceedings of Interspeech 2019, the 20th Annual Conference of the Interna- tional Speech Communication Association, pages 3725-3729. ISCA.
Arabizi identification in Twitter data. Taha Tobaili, 10.18653/v1/P16-3008Proceedings of the ACL 2016 Student Research Workshop. the ACL 2016 Student Research WorkshopBerlin, GermanyAssociation for Computational LinguisticsTaha Tobaili. 2016. Arabizi identification in Twitter data. In Proceedings of the ACL 2016 Student Re- search Workshop, pages 51-57, Berlin, Germany. Association for Computational Linguistics.
The interplay between language similarity and script on a novel multi-layer Algerian dialect corpus. Samia Touileb, Jeremy Barnes, 10.18653/v1/2021.findings-acl.324Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021. Online. Association for Computational LinguisticsSamia Touileb and Jeremy Barnes. 2021. The interplay between language similarity and script on a novel multi-layer Algerian dialect corpus. In Findings of the Association for Computational Linguistics: ACL- IJCNLP 2021, pages 3700-3712, Online. Associa- tion for Computational Linguistics.
Emna Souissi, and Ahmed Ferchichi. 2020. A deep learning approach for the Romanized Tunisian dialect identification. Jihene Younes, Hadhemi Achour, The International Arab Journal of Information Technology. 176Jihene Younes, Hadhemi Achour, Emna Souissi, and Ahmed Ferchichi. 2020. A deep learning approach for the Romanized Tunisian dialect identification. The International Arab Journal of Information Tech- nology, 17(6):935-946.
Romanized Tunisian dialect transliteration using sequence labelling techniques. Jihene Younes, Hadhemi Achour, Emna Souissi, and Ahmed Ferchichi. 2022. J. King Saud Univ. Comput. Inf. Sci., 34(3):982-992. 10.1016/j.jksuci.2020.03.008 |
16,304,968 | On the Discourse Analysis in Korean Dialogues | The purpose of the paper is twofold. First, we revise the well-known Centering Theory of anaphora resolution and propose the Controlled Information Packaging Theory (CIPT, for short). Second, we suggest a solution to the resolution of the antecedents of pronouns within the framework of CIPT. For this purpose, we select a dialogue of hotel reservation as a domain-restricted discourse, and discuss the characteristics of the distribution of pronouns. We suggest that we need to place the Slot-Link element on the top of the forward centering list. We claim that we need to establish a constraint on conceptual compatibility. As for the pronouns in the main dialogue, we propose a constraint of discourse command (d-command). | [
200618,
6644980,
1141127
] | On the Discourse Analysis in Korean Dialogues
Ik-Hwan Lee :ihlee@yonsei.ac.kr
Dept. of English
Dept. of German
Yonsei University
120-749Seoul
Minhaeng Lee
Yonsei University
120-749Seoul
On the Discourse Analysis in Korean Dialogues
The purpose of the paper is twofold. First, we revise the well-known Centering Theory of anaphora resolution and propose the Controlled Information Packaging Theory (CIPT, for short). Second, we suggest a solution to the resolution of the antecedents of pronouns within the framework of CIPT. For this purpose, we select a dialogue of hotel reservation as a domain-restricted discourse, and discuss the characteristics of the distribution of pronouns. We suggest that we need to place the Slot-Link element on the top of the forward centering list. We claim that we need to establish a constraint on conceptual compatibility. As for the pronouns in the main dialogue, we propose a constraint of discourse command (d-command).
INTRODUCTION
In Korean, the zero anaphora is very common in a domain-restricted dialogue such as the one found in the situation of hotel reservation, as follows:
(1) U1: Ø iss-e-yo?
        exist
        (Is there a room free?)
    U2: nalcca encey-sip-nikka?
        date when
        (For what date are you going to make a reservation?)
    U3: onul cenyek-ey.
        today night
        (I'd like to make a reservation for tonight.)
[U = Utterance, Ø = zero pro-form]
In the above example, a long discussed issue is how to establish the antecedent of the zero anaphors. In this study we propose a reasonable and reliable solution to the problem.
The following five types of information structures are assumed in CIPT:
(2) a. Link-Tail-Focus structure (L-T-F structure)
    b. Link-Focus structure (L-F structure)
    c. Tail-Focus structure (T-F structure)
    d. Focus structure (F structure)
    e. Slot Link-Focus structure (SL-F structure)
The SL-F structure is the one defined by Lee & Lee(1998), in addition to the original information packaging theory of Vallduvi (1994). We adopt the concept of the frame theory devised in the Artificial Intelligence community.
We thank Professor Jungyun Seo of Sogang University and Professor Hyunho Lee of Tongyang Technical College for allowing us to use the corpus they constructed for the Soft Science Project.
In this paper we claim that the sentences with zero anaphors tend to exhibit the SL-F structure, on the basis of empirical evidence from actual dialogue corpora found in situations such as hotel reservation, theater talk, etc. As a next step we propose a revised ranking of the forward-looking centers in the sense of centering theory. It is claimed that the componential status of the information structure of the relevant utterance is revealed in the form of a hierarchy as follows:
(3) SL-component > Speaker, Hearer > Subject > Indirect Object > Direct Object > Others
With this hierarchy, we can calculate the reference of zero anaphora in any form of domain restricted dialogues.
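One way the ranking in (3) could be operationalised for resolving a zero anaphor is sketched below in Python; the role labels and the compatibility check are our own illustrative stand-ins for the constraint on conceptual compatibility mentioned above, not the authors' algorithm.

RANK = {'slot_link': 0, 'speaker': 1, 'hearer': 1, 'subject': 2,
        'indirect_object': 3, 'direct_object': 4, 'other': 5}

def forward_centers(entities):
    # entities: (referent, role) pairs from the preceding utterance, ordered by the hierarchy in (3)
    return [ref for ref, role in sorted(entities, key=lambda e: RANK.get(e[1], 5))]

def resolve_zero(entities, conceptually_compatible):
    # pick the highest-ranked forward-looking center that passes the compatibility check
    for referent in forward_centers(entities):
        if conceptually_compatible(referent):
            return referent
    return None

forward_centers([('hotel', 'other'), ('room', 'slot_link'), ('I', 'speaker')])
# -> ['room', 'I', 'hotel']: the Slot-Link element outranks the other centers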
As for the overt anaphor, H. Lee (1998) postulates a constraint for the recovery of its antecedent at the moment when a sentence is uttered after returning from a sub-dialogue. He observes that an overt pronoun must have its antecedent in the sub-dialogue when it appears in the first utterance immediately after the sub-dialogue. Look at the example in (4). In H. Lee's (1998) analysis, the overt anaphor kuri 'there' in the utterance U5 has its antecedent Naksan in the previous sub-dialogue (namely, U2(S1)). We, however, claim that the proposed analysis is not convincing because the same antecedent can also be found in the utterance U1, which is in the main dialogue.
In this paper we show that H. Lee's hypothesis is not correct and we propose a general constraint on the interpretation of the overt anaphor, on the basis of the analysis of a realistic corpus. The constraint is stated as follows:
(5) The overt anaphor has its antecedent in the discourse segment of the same or higher level.
INFORMATION PACKAGING THEORY
In (2) above we mentioned five types of dialogue structures. We now discuss the ideas using Vallduvi's(1994) examples. Let us first examine the Link-Tail-Focus structure depicted in (2a). Examine the dialogue in (6). In a dialogue such as (6), when the sentence "The president hates the Delft china set" is uttered, only the verb 'hates' becomes the focus. The phrase 'the president' is a link component and 'the Delft china set' a tail component. Accordingly, the cognitive processing will go on as in (7). If the same sentence is uttered in a different context, the information structure will be different as shown in (8). In this case, 'the president' is a link component and 'hates the Delft china set' becomes the focus component. Here, in the cognitive process, the first step is to look up the information card of the noun phrase 'the president'. Then we are supposed to add the information 'hates the Delft china set' to the card.
In the example in (9) we see that no explicit link component appears. Here only 'hates' becomes the focus component, and the noun phrase 'the Delft china set' functions as the tail component. We do not have the link component 'the president'. In this case, we assume that the information card for 'the president' has been activated and continues to be in the activated state. In the card we replace any previous information related to the relation between the president and the Delft china set with 'hates'.
Let us now examine a situation where the example (8) is uttered in a different context as in (10). Here the whole verb phrase 'hates the Delft china set' is the focus component. This information is added to the activated card of 'the president'.
CONTROLLED INFORMATION PACKAGING THEORY(CIPT)
In this section, we discuss the two characteristics of the Controlled Information Packaging Theory(CIPT, for short). The CIPT is distinguished from Vallduvi's Information Packaging Theory in two respects.
First, in our CIPT we postulate the fifth structure, the SL-F structure. Vallduvi (1994: 16) discusses dialogues like the one given in (11). (Footnote 2: the pronoun 'he' is not overtly pronounced; it is shown just to indicate the place where 'the president' is assumed to appear.)
(11) a. A: Why don't you go to the theater more often? b. B: TICKETS are expensive.
He notes that the sentence in (11b) is not about any particular referent. He observes that in this case no particular focus of update is designated. He suggests that a salient general temporary situation file card be used to record the new information. This sentence is sometimes termed to be reporting a situation.
If we look at the situation closely, however, we can clearly see that the noun phrase 'tickets' in (11b) is referentially related to the noun phrase 'the theater' in (11a). If we use the notion of frame suggested by Minsky (1975) to represent our cognitive knowledge of the actual world, we can naturally relate 'tickets' to 'the theater'. Minsky assumes that our knowledge about the world is represented in terms of frames, each of which in turn consists of many slots. The theater provides us a frame of world knowledge and the noun phrase 'tickets' fills in one of the slots.
The idea can be represented as in (12).
(12) Structure of the 'Frame and Slots'
F(frame) [Ex. THEATER]
S(slot)1   S(slot)2   S(slot)3   [Ex. TICKETS]
In this frame and slot analysis, we can say that when (11a) is uttered, the information card of 'the theater' is activated in the cognitive structure of the hearer, and the noun phrase 'tickets' can be triggered by this activation, which is exemplified in [ ] in (12). By introducing this idea of frame and slot representation, we extend Vallduvi's theory and postulate the fifth information structure, namely the Slot Link-Focus structure. We now analyze (11b) as in (13). As shown in (13), we treat the noun phrase in (11b) as a kind of link component. We now introduce a new notion of Hyper-link. The new information 'is expensive' is not directly linked to the noun phrase 'the theater' in (11a). We assume there to be a hyper-link between 'the theater' and 'tickets', established by making an additional information card. The information conveyed by the verb phrase 'is expensive' is indirectly linked to the theater through this hyper-linking card.
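The file-card bookkeeping just described can be rendered as a minimal data-structure sketch; the class and field names below are our own illustration, not part of the paper.

```python
# Minimal sketch of a file card with a hyper-link to the card it is a slot of.
class FileCard:
    def __init__(self, name, hyper_link=None):
        self.name = name
        self.hyper_link = hyper_link   # card this one is a slot of, if any
        self.facts = []                # focus information added over time

    def add(self, info):
        self.facts.append(info)

# (11a) activates the card for 'the theater'; the frame's slot 'tickets'
# triggers an additional, hyper-linked card (the SL component of (13)).
theater = FileCard("the theater")
tickets = FileCard("tickets", hyper_link=theater)
# The focus of (11b) is recorded on the slot card, and is thus only indirectly
# linked to 'the theater' through the hyper-linking card.
tickets.add("are expensive")
print(tickets.name, tickets.facts, "->", tickets.hyper_link.name)
```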
The new Slot Link-Focus device can naturally explain the so-called bridging phenomena discussed by I.-H. Lee(1994). Look at the examples in (14). (14) a. John entered a large dining room.
b. The chandelier hung by an imported gold chain.
The noun phrase 'a large dining room' in (14a) needs to be related in some way to the noun phrase 'the chandelier' in (14b). This referential relation can be properly captured by the hyper-link structure, which may be represented by the sentence in (15). The sentence in (15) bridges (14a) to (14b). We see that Vallduvi's original information packaging theory cannot appropriately handle examples like (11) and (14), whereas our extended information packaging theory, including the Slot Link-Focus structure, can provide a proper account of the data in question.
Second, our CIPT assumes a center controlling file card that includes the information about the discourse structure and the ordinary file cards. A center controlling card is assumed to have the structure depicted in (16). With the center controlling card, we also have to assume that the ordinary file card must carry the information about the discourse level to which it belongs. Accordingly, we assume that an ordinary file card has the structure given in (18). As shown in (20b), the zero anaphor in (19b) is interpreted as having Tokkocwun as its antecedent, because Tokkocwun is the backward center in (20b), namely in (19b). Now, let us examine a dialogue for hotel reservation. [U = Utterance, G = Guest, H = Hotel, Os/Oo = zero pro-form]
In the above dialogue, we see frequent appearances of null anaphors. If we recover the antecedent of each of the null anaphors, we obtain the following.
(22) U1 Os = empty room; U3 Os = dates of stay; U5 Os = dates of stay; U6 Os = reservation; U10 Os = hearer; U11 Os = speaker; U14 Os = speaker, Oo = hearer
The antecedents of the null anaphors in the above dialogue are related to the hotel reservation. Thus, viewing from the notion of frame, we can say that the hotel reservation frame is activated and that such slots as 'empty room,' dates of stay,' 'reservation' are also activated in the frame. In this way the antecedents of the null anaphors are interpreted. Accordingly, we claim that the fifth utterance U5: Os onul cenyek 'Os tonight' has the following information structure.
(23) [Os] SL [onul cenyek] F
In this way, the null anaphors appearing in a restricted dialogue such as a hotel reservation dialogue show the Slot Link-Focus structure. Thus, we propose a revision of the ranking of the forward-looking centers as shown in (24), so that the notion of information structure is included in the centering theory.
(25) The constraint on conceptual compatibility: Every individual which is not explicitly expressed must be interpreted in terms of the explicit expression which is conceptually compatible.
This constraint is supported by the expressions used as slots with the specific predicates in the frame of hotel reservation. The relationship between slots and predicates may be arranged as in (26).
(26) empty room :: isseyo? 'have?', issupnita 'have', eosupnita 'have not'
rate :: elmayo? 'how much'
period :: ilpak 'one night', ipak 'two nights'
date :: onul 'today', nayil 'tomorrow'
The constraint makes it possible to select the most appropriate candidate for a null pronoun.
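Purely as an illustration, the compatibility check could be encoded as a lookup over a slot-to-predicate table like (26); the selection procedure sketched below is our own reading of the constraint, not an implementation from the paper.

```python
# Sketch of the conceptual compatibility constraint (25) over the table in (26).
SLOT_PREDICATES = {
    "empty room": {"isseyo?", "issupnita", "eosupnita"},
    "rate": {"elmayo?"},
    "period": {"ilpak", "ipak"},
    "date": {"onul", "nayil"},
}

def compatible_slots(predicate):
    """Return the hotel-reservation slots conceptually compatible with the
    overt predicate of the utterance."""
    return [slot for slot, preds in SLOT_PREDICATES.items() if predicate in preds]

# U5 'Os onul cenyek' contains the date word 'onul', so among the activated
# slots only 'date' is conceptually compatible and is selected as antecedent.
print(compatible_slots("onul"))   # -> ['date']
```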
MAIN DIALOGUE AND SUB-DIALOGUES
In general, a dialogue consists of a series of utterances. Some of the utterances may constitute a sub-dialogue, which may cause a pause in the stream of the main dialogue, as shown in (4). With this dialogue, H. Lee (1998) claims that the antecedent of the pro-form kuli 'there' must be searched for in the immediately preceding sub-dialogue. We see that this claim is too strong, if not incorrect.
Let us examine another discourse in (28), which we adopted from a TV talk show.
(28) U1: kulayse incey ey cohci nayka kulay hanta hay kaciko hay / therefore well oh good me so try determined / Pollanikka talun kenun casinissnuntey swuhak-i mwunceyeyyo. / be_willing_to other thing convinced mathematics-NOM problem / (Therefore, I determined that I would try to do that. But the mathematics was a problem.)
</Sub-dialogue2> </Sub-dialogue1>
U11: kuke nun mollay kamchwenohko incey kukel pomyense cakkwu / it_TOP secretly hide now it learn often / ponikka incey kuken swuipkey toytelakwuyo. / learn now it easy got / (I learned the book, hiding the book secretly, and then I could easily understand the contents, because I often learned it.)
U12: yey, yey. / so / (It was so.)
U13: kwukminhakkyo kekinun kumpang tetume ponikka toyko, / primary school that_TOP soon turn fumble_in reach / (I could early reach some goal through just fumbling in the book on the primary school level.)
U14: kulehkeyhayse incey hakwenul tunglokul hakey toyn kecyo mollay. / Therefore now private institute enroll PERF secretly / (So, I've secretly enrolled in a private institute.)
In the above, we see a complex dialogue which includes a sub-dialogue, which in turn has another sub-dialogue. Here we call attention to the pronoun kuke 'it' in U11. How can we establish the antecedent of this pronoun? If we follow H. Lee's theory, we have to search for the antecedent in the immediately preceding sub-dialogue. In the preceding sub-dialogue, however, we do not see the phrase cenkwa 4 haknyen 'reference book 4th grade'. The antecedent of the pronoun kuke 'it' in U11 cannot be found in the sub-dialogues. We see the antecedent cenkwa 4 haknyen 'reference book 4th grade' in U3, which is the utterance just before the first sub-dialogue. This shows that the antecedent of the pronoun is not necessarily found in the immediately preceding sub-dialogue. This fact proves that H. Lee's claim is not correct.
Considering the search for the antecedents of pronouns appearing in the global dialogue, as an alternative to H. Lee's (1998) theory of sub-dialogues we propose the discourse command constraint in (29).
(29) Discourse command constraint: In a discourse, the antecedent of a pronoun must be able to discourse command the pronoun.
The discourse command (d-command) is defined as in (30).
(30) Discourse command In a discourse an expression A discourse commands an expression B if one of the following is satisfied: (i) A and B belong to the same level of the dialogue. (ii) B belongs to the level of dialogue lower than the level of dialogue to which A belongs.
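The constraint in (29)-(30) lends itself to a simple procedural reading, sketched below; the numeric encoding of dialogue levels (main dialogue = 0, each embedded sub-dialogue one level deeper) is our own assumption for illustration.

```python
# Illustrative reading of the discourse command constraint (29)-(30).
def d_commands(antecedent_level, pronoun_level):
    """A discourse-commands B iff A and B are at the same level (i), or B is
    at a more deeply embedded level than A (ii)."""
    return pronoun_level >= antecedent_level

def admissible_antecedents(pronoun_level, candidates):
    """Keep only candidates that d-command the pronoun, i.e. candidates from
    the same or a higher (less embedded) level of the dialogue."""
    return [c for c, lvl in candidates if d_commands(lvl, pronoun_level)]

# (28): kuke 'it' in U11 is in the main dialogue (level 0); material introduced
# only inside the sub-dialogues (levels 1 and 2) cannot serve as its antecedent.
print(admissible_antecedents(0, [("cenkwa 4 haknyen", 0), ("pwunswu nanwuki", 2)]))
```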
(7) a. Look up the information card of 'the president'. b. Replace any previous information concerning the relation between the president and the Delft china set with the new information 'HATES'. (Information updating)
(8) a. A: I'm arranging things for the president's dinner. Anything I should know? b. B: Yes. [L The president] [F hates the Delft CHINA SET].
(9) a. A: In the Netherlands I got the president a big Delft china tray that matches the set he has in the living room. Was that a good idea? b. B: No. [F (He) HATES] [T the Delft china set].
(10) a. A: I'm arranging things for the president's dinner. Anything I should know? b. B: Yes. The president always uses plastic dishes. [F (He) hates the Delft CHINA SET].
(13) [SL TICKETS] [F are expensive].
(15) The large dining room had a chandelier.
(16) The center controlling card contains, among other things, a hyper-link with the center controlling card of the immediately higher level and a hyper-link with the center controlling card of the immediately lower level. An example of the center controlling card is shown in (17).
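The two card types assumed by CIPT can be sketched as simple data structures; the attribute names below are invented for illustration and only reflect our reading of (16) and (18).

```python
# Hedged sketch of the CIPT card types: ordinary file cards carry a discourse
# level, while center controlling cards are hyper-linked up and down the
# hierarchy of dialogue levels.
class OrdinaryCard:
    def __init__(self, name, level):
        self.name = name
        self.level = level          # discourse level the card belongs to (18)

class CenterControllingCard:
    def __init__(self, level, higher=None):
        self.level = level
        self.higher = higher        # controlling card one level up, if any
        self.lower = []             # controlling cards one level down
        self.cards = []             # ordinary file cards introduced at this level
        if higher is not None:
            higher.lower.append(self)

main = CenterControllingCard(level=0)
sub = CenterControllingCard(level=1, higher=main)      # a sub-dialogue
main.cards.append(OrdinaryCard("Naksan", level=0))     # e.g. the antecedent in (4)
print(sub.higher.cards[0].name)
```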
The use of the center controlling card enables us to deal with the anaphor in global discourses. Detailed examples will be discussed in Section 5 below.
ZERO ANAPHOR
In a series of utterances, there is a list of items, each of which may become the center of the dialogue (Walker & Prince 1997). According to Choe & Lee (1999), this notion of center is useful in establishing the antecedent of zero anaphors in Korean. Let us examine the discourse in (19).
(19) a. tokkocwun un ssuki lul memchwu essta. / tokkocwun TP writing AccP stop Pst DP / (Tokkocwun stopped writing.)
b. han kay namun kamca lul cipese ip ey ne essta. / one remained potato AccP pick mouth put in / ( picked up one remaining potato and put it in the mouth.)
c. son ul ppetese pyekcangmun ul yenta. / hand stretch closet door open / ( stretched out (his) hand and opened the closet door.)
d. wi alay twu khan ulo nanwiecin pyekcang an un tachaylopta. / up down two part divided closet in colorful / (The inside divided into two parts is colorful.)
This series of utterances may produce the centers given in (20). Here Cb means the backward center - similar to the traditional notion of Topic - which may function as the antecedent of the zero/explicit anaphor, while Cf means the list of forward-looking centers.
(20) a. Cb = [?], Cf = [Tokkocwun]
b. Cb = Tokkocwun, Cf = [Tokkocwun, kamca 'potato', ip 'mouth']
c. Cb = Tokkocwun, Cf = [Tokkocwun, son 'hand', pyekcangmun 'closet door']
d. Cb = (?) pyekcang 'closet', Cf = [pyekcang 'closet']
From the frame point of view, among the slots in the frame of hotel reservation we will have 'empty room,' 'rate,' 'staying days,' 'reservation,' etc. We need a process for deciding the proper antecedent of the null anaphor in the dialogue (e.g., U5). The relevant constraint for the decision is postulated as in (25), following Kook Chung et al. (1998).
U2: um. / well / (Well.)
U3: kulay incey chengkyeychen ke ka kaciko cenkwa 4 haknyenccalipwuthe chem hwulthenaylyeka ponikka 4 haknyenccalipwuthe pwaya toykeysstelakwuyo. / so well Chengkyeychen go PERF reference book 4th grade_from at first glance EXP 4th grade_from learn find out / (So, I first glanced through the reference books and found out that I should learn the mathematics from the 4th grade.)
<Sub-dialogue1>
U4: kwukminhakkyo 4 haknyen? / the primary school 4th grade / (Do you mean the 4th grade of the primary school?)
U5: yey. / Yes / (Yes.)
U6: yey. / so / (It's so.)
<Sub-dialogue2>
U7: pwunswu nanwuki ilen / fraction division these / (The subjects were those like the fraction and division.)
U8: [@-@]
U9: kuke to icepelyessunikkan / those all forgot_because / (Because I forgot all the mathematical knowledge.)
U10: yey. / Yes / (I see.)
According to the discourse command constraint, the antecedent of a pronoun must be sought in the same or a higher level of dialogue. As discussed above, the antecedent of the pronoun kuke 'it' in the utterance U11 in (28) is in the same level of discourse, not in the sub-dialogue. As for the pro-form kuli 'there' of U5 in (27), its antecedent appears in the sub-dialogue U2. Thus, this phenomenon seems to support the theory of sub-dialogues. But the antecedent also appears in U1, which was uttered before the start of the sub-dialogue. Notice that U1 and U5 are in the same level of dialogue. Therefore, this case observes the discourse command constraint.
Let us now see how the discourse command constraint is incorporated in the Controlled Information Packaging Theory (CIPT). Let us examine an example. In this dialogue the pronoun kukes 'it' in U9 has an event, namely the event of making the car lane of about 300 or 400 meters long, as its antecedent. This event is one of the centers activated by the utterance U5, because an event may be considered to be one of the centers in the forward centering list. The noun chasen 'car lane' in U6 is a backward center only in the sub-dialogue. Thus, we have to search for the antecedent of the pronoun kukes 'it' in U9 in U5, which belongs to the same level of dialogue. In this case, the event itself is the antecedent. Thus, it cannot be found in the sub-dialogues U6 through U8. This can be predicted by the discourse command constraint.
CONCLUSION
This paper discussed the searching mechanism for the antecedents of pronouns in Korean dialogue. We discussed the characteristics of zero pronouns appearing in a restricted dialogue of hotel reservation. In this case we claimed that, viewed from the information structure, the Slot-Link element is the possible antecedent of the null pronoun and that it must be placed in the highest position in the forward-looking centers list in the centering theory. We suggested the constraint on conceptual compatibility for the selection of the appropriate antecedent out of many possible ones. Concerning the search for the antecedents of pronouns in a global dialogue, we introduced a center controlling card to account for the anaphoric relation induced by the hierarchical structure of the global dialogue and the sub-dialogues. On the basis of the levels we postulated the general discourse command (d-command) constraint, to the effect that the antecedent must discourse command its pronoun.
A Centering Approach to Pronoun. Brennan, Proceedings of the 25th Annual Meeting of the ACL. the 25th Annual Meeting of the ACLStanfordBrennan et. al. (1987). "A Centering Approach to Pronoun," In Proceedings of the 25th Annual Meeting of the ACL, Stanford, 155-162.
Jae-Woong & Minhaeng Choe, Lee, Formal Semantics and Descriptions of Korean. Seoul: Hanshin. Choe, Jae-Woong & Minhaeng Lee (1999). "Focus," in Formal Semantics and Descriptions of Korean. Seoul: Hanshin, 157-205.
A Study of Korean Prosody and Discourse For the Development of Speech Synthesis/Recognition System. K Chung, Korea Telecom Research & Development Groupin KoreanChung, K. (1998). A Study of Korean Prosody and Discourse For the Development of Speech Synthesis/Recognition System. (in Korean) Korea Telecom Research & Development Group.
Centering: A Framework for Modeling the Local Coherence of Discourse. Grosz, In Computational Linguistics. 212Grosz et. al. (1995). "Centering: A Framework for Modeling the Local Coherence of Discourse," In Computational Linguistics 21(2), 203-225.
Zero Anaphora: the Case of Japanese. M Kameyama, Stanford University Ph.D. DissKameyama, M. (1985). Zero Anaphora: the Case of Japanese. Stanford University Ph.D. Diss.
Functional Sentence Perspective. Susumu Kuno, Linguistic Inquiry. 3Kuno, Susumu. (1972). "Functional Sentence Perspective," Linguistic Inquiry 3, 269-320.
A Study of Korean Sub-dialogues. Hyonho Lee, Korean Journal of Cognitive Science. 9Lee, Hyonho. (1998). "A Study of Korean Sub-dialogues," Korean Journal of Cognitive Science 9.3, 47-59.
Bridging Situations and NPI Licensing. Lee, Ik-Hwan, Situation Theory and its Applications. J. Seligman and D. WesterstahlStanford, CalifCSLI PublicationsLee, Ik-Hwan (1994). "Bridging Situations and NPI Licensing," in J. Seligman and D. Westerstahl, eds., Situation Theory and its Applications. Stanford, Calif.: CSLI Publications.
A Cognitive Model for the Interpretation of the Referential Expressions and Information Structure. Ik-Hwan & Minhaeng Lee, Lee, Korean) Korean Journal of Linguistics. 23Lee, Ik-Hwan & Minhaeng Lee (1998). "A Cognitive Model for the Interpretation of the Referential Expressions and Information Structure," (in Korean) Korean Journal of Linguistics 23.1, 65-85.
On the Anaphora Resolution in Korean Dialogues. Ik-Hwan & Minhaeng Lee, Lee, Harvard International Conference on Korean Linguistics. Lee, Ik-Hwan & Minhaeng Lee (1999). "On the Anaphora Resolution in Korean Dialogues," Harvard International Conference on Korean Linguistics.
A Framework for Representing Knowledge. M L Minsky, P. H. WinstonMcGraw-HillNew YorkThe Psychology of Computer VisionMinsky, M. L. (1975). "A Framework for Representing Knowledge," in P. H. Winston, ed. The Psychology of Computer Vision. New York: McGraw-Hill.
Pars Talk about sentence-and text-level anaphora. M / U Strube, Hahn, Proc. of EACL-95. of EACL-95Strube, M. / U. Hahn (1995). " Pars Talk about sentence-and text-level anaphora," in Proc. of EACL- 95, 270-277.
Functional Centering. M / U Strube, Hahn, ACL '96. Strube, M. / U. Hahn (1996). "Functional Centering," in ACL '96, 270-277.
Integrating Information Structure into Constraint-Based Categorial Approaches. E Vallduvi, E. EngdahlHCRC PublicationsUniversity of EdinburghThe Dynamics of Information PackagingVallduvi, E. (1994). "The Dynamics of Information Packaging," in: E. Engdahl, ed., Integrating Information Structure into Constraint-Based Categorial Approaches, 4-26. HCRC Publications, University of Edinburgh.
Japanese discourse and the process of centering. M Walker, M Iida, & S Cote, Computational Linguistics. 21Walker, M., M. Iida, & S. Cote (1994). "Japanese discourse and the process of centering," Computational Linguistics 21, 1-38.
A Bilateral Approach to Givenness: A Hearer-status Algorithm and a Centering Algorithm. M Walker, Prince, Ms. University of PennsylvaniaWalker. M. & A Prince. (1997). "A Bilateral Approach to Givenness: A Hearer-status Algorithm and a Centering Algorithm," Ms. University of Pennsylvania.
. Yongkyun, Linguistics. 29A centering approach to the [case][topic] restriction in KoreanYongkyun No (1991). "A centering approach to the [case][topic] restriction in Korean," Linguistics, 29, 653-668. |
7,650,388 | Annotating Archaeological Texts: An Example of Domain-Specific Annotation in the Humanities | Developing content extraction methods for Humanities domains raises a number of challenges, from the abundance of non-standard entity types to their complexity to the scarcity of data. Close collaboration with Humanities scholars is essential to address these challenges. We discuss an annotation schema for Archaeological texts developed in collaboration with domain experts. Its development required a number of iterations to make sure all the most important entity types were included, as well as addressing challenges including a domain-specific handling of temporal expressions, and the existence of many systematic types of ambiguity. | [
15146176,
16730978,
11494622
] | Annotating Archaeological Texts: An Example of Domain-Specific Annotation in the Humanities
Association for Computational LinguisticsCopyright Association for Computational LinguisticsJuly 2012. 2012
Francesca Bonin Scss
Clcs
Fabio Cavulli
Aronne Noriller
Massimo Poesio
Trinity College Dublin
Ireland
University of Trento
Italy
University of Trento
Italy
University of Essex
UK
Egon W. Stemle EURAC
University of Trento
Italy, Italy
Annotating Archaeological Texts: An Example of Domain-Specific Annotation in the Humanities
Proceedings of the 6th Linguistic Annotation Workshop
the 6th Linguistic Annotation WorkshopJeju, Republic of KoreaAssociation for Computational LinguisticsJuly 2012. 2012
Developing content extraction methods for Humanities domains raises a number of challenges, from the abundance of non-standard entity types to their complexity to the scarcity of data. Close collaboration with Humanities scholars is essential to address these challenges. We discuss an annotation schema for Archaeological texts developed in collaboration with domain experts. Its development required a number of iterations to make sure all the most important entity types were included, as well as addressing challenges including a domain-specific handling of temporal expressions, and the existence of many systematic types of ambiguity.
Introduction
Content extraction techniques - so far, mainly used to analyse news and scientific publications - will play an important role in digital libraries for the humanities as well: for instance, certain types of browsing that content extraction is meant to support, such as entity, spatial and temporal browsing, could considerably improve the quality of repositories and of their browsing. However, applying content extraction to the Humanities requires addressing a number of problems: first of all, the lack of large quantities of data; then, the fact that entities in these domains, in addition to those adhering to well-established standards, also include very domain-specific ones.
Archaeological texts are a very good example of the challenges inherent in humanities domains, and at the same time, they deepen the understanding of the possible improvements content extraction yields for these domains. For instance, archaeological texts could benefit from temporal browsing on the basis of the temporal metadata extracted from the content of the publication (as opposed to temporal browsing based on the date of publication), more than biological publications or general news. In this paper, we discuss the development of a new annotation schema: it has been designed specifically for use in the archaeology domain to support spatial and temporal browsing. To our knowledge this schema is one of only a very few schemata for the annotation of archaeological texts (Byrne et al., 2010), and Humanities domains in general (Martinez-Carrillo et al., 2012) (Agosti and Orio, 2011). The paper is structured as follows. In Section 2 we give a brief description of the corpus and the framework in which the annotation has been developed; in Section 3, we describe a first annotation schema, analysing its performance and its weaknesses; in Section 4 we propose a revised version of the annotation schema, building upon the first experience and, in Section 5, we evaluate the performance of the new schema, describing a pilot annotation test and the results of the inter-annotator agreement evaluation.
Framework and Corpus Description
The annotation process at hand takes place in the framework of the development of the Portale della Ricerca Umanistica / Humanities Research Portal (PRU) (Poesio et al., 2011a), a one-stop search facility for repositories of research articles and other types of publications in the Humanities. The portal uses content extraction techniques for extracting, from the uploaded publications, citations and metadata, together with temporal, spatial, and entity references (Poesio et al., 2011b). It provides access to the Archaeological articles in the APSAT / ALPINET repository, and therefore dedicated content extraction resources needed to be created, tuned on the specificities of the domain. The corpus of articles in the repository consists of a complete collection of the journal Preistoria Alpina published by the Museo Tridentino di Scienze Naturali. In order to make those articles accessible through the portal, they are tokenized, PoS tagged and Named Entity (NE) annotated by the TEXTPRO 1 pipeline (Pianta et al., 2008). The first version of the pipeline included the default TEXTPRO NE tagger, EntityPro, trained to recognize the standard ACE entity types. However, the final version of the portal is based on an improved version of the NE tagger capable of recognising all relevant entities in the APSAT/ALPINET collection (Poesio et al., 2011b; Ekbal et al., 2012).
Annotation Schema for the Archaeological Domain
A close collaboration with the University of Trento's "B. Bagolini" Laboratory resulted in the development of an annotation schema particularly suited for the Archaeological domain (Table 1). Differently from (Byrne et al., 2010), the work focused particularly on the definition of specific archaeological named entities, in order to create a very fine-grained description of the documents. In fact, we can distinguish two general types of entities: contextual entities, those that are part of the content of the article (such as PERSONs, SITEs, CULTUREs, ARTEFACTs), and bibliographical entities, those that refer to bibliographical information (such as PubYEARs, etc.) (Poesio et al., 2011a). In total, domain experts predefined 13 entities, and also added an underspecification tag for dealing with ambiguity. In fact, the archaeological domain is rich in polysemous cases: for instance, the term 'Fiorano' refers to a CULTURE, from the Ancient Neolithic, that takes its name from the SITE 'Fiorano', which in turn is named after Fiorano Modenese; during the first annotation, it was decided that those references be marked as underspecified.
Annotation with the First Annotation Schema and Error Analysis
A manual annotation, using the described schema, was carried out on a small subset of 11 articles of Preistoria Alpina (in English and Italian) and was used as a training set for the NE tagger; the latter was trained with a novel active annotation technique (Vlachos, 2006; Settles, 2009). The quality of the initial manual annotation was estimated using qualitative analyses for assessing the representativeness of the annotation schema, and quantitative analyses for measuring the inter-annotator agreement. Qualitative analyses revealed a lack of specificity of the entity TIME and of the entity PERSON. In fact, the annotation schema only provided a general TIME entity used for marking historical periods (as Mesolithic, Neolithic) as well as specific dates (as 1200 A.D.) and proposed dates (as from 50-100 B.C.), although all these instances need to be clearly distinguished in the archaeological domain. Similarly, PERSON had been used for indicating both general persons belonging to the document's contents and scientists working on the same topic (but not addressed as bibliographical references). For the inter-annotator agreement on the initial manual annotation, we calculated a kappa value of 0.8, which suggests a very good agreement. Finally, we carried out quantitative analyses of the automatic annotation. Considering the specificity of the domain, the NE tagger reached high performance, but low accuracy resulted on the domain-specific entities, such as SITE, CULTURE and TIME (F-measures ranging from 34% to 70%). In particular, SITE and LOCATION, as well as CULTURE and TIME, turned out to be mostly confused by the system. This result may be explained by the existence of many polysemous cases in the domain, which annotators used to mark as underspecified. This cross-error analysis revealed two main problems of the adopted annotation schema for Archaeological texts: 1) the lack of representativeness of the entities TIME and PERSON, used for marking concurrent concepts, and 2) the accuracy problems due to the existence of underspecified entities.
A Revised Annotation Schema and Coding Instructions
Taking these analyses into consideration, we developed a new annotation schema (Table 2): the aforementioned problems of the previous section were solved and the first schema's results were outperformed in terms of accuracy and representativeness.
The main improvements of the schema are:
1. New TIME and PERSON entities 2. New decision trees, aimed at overcoming underspecification and helping annotators in ambiguous cases.
3. New domain-specific NEs, such as MATERIAL
4. Fine-grained specification of ECOFACT: AnimalEcofact and BotanicEcofact.
Similarly to (Byrne, 2006), we defined more fine-grained entities, in order to better represent the specificity of the domain; on the other hand, we could also find correlations with the CIDOC Conceptual Reference Model (Crofts et al., 2011). 2
TIME and PERSON Entities
The archaeological domain is characterized by a very interesting representation of time. Domain experts need to distinguish different kinds of TIME annotations. In some cases, C-14 analysis on remains and artefacts allows very exact dating to be detected; those cases have been annotated as AbsTIME. On the other hand, there are cases in which different clues, given by the analysis of the settlements (technical skills, used materials, presence of particular species), allow archaeologists to detect a time frame for a possible dating. Those cases have been annotated as ProposedTime (e.g. from 50-100 B.C.). Finally, macro time periods, such as Neolithic and Mesolithic, are annotated as HistoricalTIME: interestingly, those macro periods do not refer to an exact range of years, but their collocation in time depends on cultural and geographical factors.
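As a rough illustration of how the three TIME subtypes could be told apart with surface cues, consider the sketch below; the regular expressions and the period list are illustrative placeholders, not the annotation guidelines themselves.

```python
import re

# Hedged sketch: surface-based assignment of TIME mentions to the three subtypes.
HISTORICAL_PERIODS = {"neolithic", "mesolithic", "paleolithic", "bronze age"}

def time_subtype(mention):
    m = mention.lower()
    if m in HISTORICAL_PERIODS:
        return "HistoricalTIME"                        # macro period, e.g. 'Neolithic'
    if re.search(r"\d+\s*-\s*\d+\s*(b\.?c\.?|a\.?d\.?)", m):
        return "ProposedTime"                          # a range such as '50-100 B.C.'
    if re.search(r"\d+\s*(b\.?c\.?|a\.?d\.?|bp)", m):
        return "AbsTIME"                               # a single dated point
    return "unknown"

print(time_subtype("Neolithic"), time_subtype("50-100 B.C."), time_subtype("1200 A.D."))
```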
Coding Schema for Underspecified Cases
In order to reduce ambiguity, and helping coders with underspecified cases, we developed the following decision trees: SITE vs LOCATION: coders are suggested to mark as LOCATION only those mentions that are clearly geographical references (eg. Mar Mediterraneo, Mediterranean Sea); SITE has to be used in all other cases (similar approach to the GPE markable in ACE); CULTURE vs TIME: a) coders are first asked to mark as HistoricalTIME those cases in which the mention belongs to a given list of macro period (such as Neolithic, Mesolithic):
• eg.: nelle societa' Neolitiche (in Neolithic societies).
b) If the modifier does not belong to that list, coders are asked to try an insertion test: della cultura + ADJ, (of the ADJ culture) :
• lo Spondylus e' un simbolo del Neolitico Danubiano = lo Spondylus e' un simbolo della cultura Neolitica Danubiana (the Spondylus is a symbol of the Danubian Neolithic = the Spondylus is a symbol of the Danubian Neolithic culture).
• la guerra fenicia != la guerra della cultura dei fenici (Phoenician war != war of the Phoenician culture).
Finally, in cases in which both tests a) and b) fail, coders are asked to mark the case and discuss it individually; a rough procedural rendering of this decision tree is sketched below.
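The following sketch is only an annotation aid, not a classifier: the macro-period list is a placeholder, and the insertion test of step (b) is left to the coders' own judgement via a callback.

```python
# Rough encoding of the CULTURE vs TIME decision tree described above.
MACRO_PERIODS = {"neolithic", "mesolithic", "paleolithic"}

def culture_or_time(modifier, passes_insertion_test):
    """`passes_insertion_test` answers step (b): does 'della cultura + ADJ'
    preserve the meaning of the original phrase?"""
    if modifier.lower() in MACRO_PERIODS:          # step (a)
        return "HistoricalTIME"
    if passes_insertion_test(modifier):            # step (b)
        return "CULTURE"
    return "discuss"                               # unresolved: mark and discuss

print(culture_or_time("Neolithic", lambda m: False))          # -> HistoricalTIME
print(culture_or_time("Neolitico Danubiano", lambda m: True)) # -> CULTURE
```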
Inter-Annotator Agreement and Evaluation
To evaluate the quality of the new annotation schema, we measured the inter-annotator agreement (IAA) achieved during a first pilot annotation of two articles from Preistoria Alpina. The IAA was calculated using the kappa metric applied on the entities detected by both annotators, and the new schema reached an overall agreement of 0.85. In Table 3, we report the results of the IAA for each NE class. Interestingly, we notice a significant increment on the problematic classes SITE and LOCATION, as well as on CULTURE. 3 Annotators performed consistently, demonstrating the reliability of the annotation schema. The new entities regarding coordinates and time seem also to be well defined and representative.
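For reference, a generic sketch of the kappa computation (not the evaluation script actually used for the figures above) is:

```python
from collections import Counter

# Cohen's kappa: observed agreement on commonly detected entities versus the
# agreement expected by chance from each coder's label totals.
def cohen_kappa(labels_a, labels_b):
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    p_obs = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    p_exp = sum(freq_a[l] * freq_b[l] for l in set(labels_a) | set(labels_b)) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

# Toy example: two coders labelling five detected mentions.
a = ["Site", "Site", "Location", "Artefact", "Site"]
b = ["Site", "Location", "Location", "Artefact", "Site"]
print(round(cohen_kappa(a, b), 2))
```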
Conclusions
In this study, we discuss the annotation of a very specific and interesting domain, namely Archaeology: it deals with problems and challenges common to many other domains in the Humanities. We have described the development of a fine-grained annotation schema, realized in close cooperation with domain experts in order to account for the domain's peculiarities and to address its very specific needs. We propose the final annotation schema for the annotation of texts in the archaeological domain. Further work will focus on the annotation of a larger amount of articles, and on the development of domain-specific tools.
1 http://textpro.fbk.eu/

NE type        Details
Culture        Artefact assemblage characterizing a group of people in a specific time and place
Site           Place where the remains of human activity are found (settlements, infrastructures)
Artefact       Objects created or modified by men (tools, vessels, ornaments)
Ecofact        Biological and environmental remains different from artefacts but culturally relevant
Feature        Remains of construction or maintenance of an area related with dwelling activities (fire places, post-holes, pits, channels, walls, ...)
Location       Geographical reference
Time           Historical periods
Organization   Association (no publications)
Person         Human being discussed in the text (Otzi the Iceman, Pliny the Elder, Caesar)
Pubauthor      Author in bibliographic references
Publoc         Publication location
Puborg         Publisher
Pubyear        Publication year

Table 1: Annotation schema for Named Entities in the Archaeology Domain
Table 2: New Annotation Schema for Named Entities in the Archaeology Domain
NE Type          Total   Kappa
Site             50      1.0
Location         13      0.76
Animalecofact    3       0.66
Botanicecofact   6       -0.01
Culture          4       1.0
Artefact         18      0.88
Material         11      0.35
Historicaltime   6       1.0
Proposedtime     0       NaN
Absolutetime     0       NaN
Pubauthor        48      0.95
Pubyear          32      1.0
Person           2       -0.003
Organization     7       0.85
Puborg           0       NaN
Feature          36      1.0
Publoc           2       -0.0038
Coordalt         0       NaN
Geosistem        0       NaN
Datum            2       1.0

Table 3: IAA per NE type: we report the total number of NE and the kappa agreement.
The repertoire of entity types in the new annotation scheme overlaps in part with those in the CIDOC CRM: for instance, AbsTime and PubYears are subtypes of E50 (Date), Historical-Time is related to E4 (Period), Artefact to E22 (Man Made Object), etc.
Five classes are not represented by this pilot annotation test; however future studies will be carried out on a significantly larger amount of data.
AcknowledgmentsThis study is a follow up of the research supported by the LiveMemories project, funded by the Autonomous Province of Trento under the Major Projects 2006 research program, and it has been partially supported by the 'Innovation Bursary' program in Trinity College Dublin.
The cultura project: Cultivating understanding and research through adaptivity. M Agosti, N Orio, Digital Libraries and Archives. Maristella Agosti, Floriana Esposito, Carlo Meghini, and Nicola Orio249M. Agosti and N. Orio. 2011. The cultura project: Cultivating understanding and research through adaptivity. In Maristella Agosti, Floriana Esposito, Carlo Meghini, and Nicola Orio, editors, Digital Libraries and Archives, volume 249 of Communications in Computer and Information Science, pages 111-114. Springer Berlin Heidelberg.
Intelligent information access from scientific papers. E J Briscoe, Current Challenges in Patent Information Retrieval. J. Tait et alSpringerE. J. Briscoe. 2011. Intelligent information access from scientific papers. In J. Tait et al, editor, Current Challenges in Patent Information Retrieval. Springer.
CoreLex: Systematic Polysemy and Underspecification. P Buitelaar, Brandeis UniversityPh.D. thesisP. Buitelaar. 1998. CoreLex: Systematic Polysemy and Underspecification. Ph.D. thesis, Brandeis University.
Automatic extraction of archaeological events from text. K Byrne, E Klein, Proceedings of Computer Applications and Quantitative Methods in Archaeology. Computer Applications and Quantitative Methods in ArchaeologyWilliamsburg, VAK. Byrne and E. Klein, 2010. Automatic extraction of archaeological events from text. In Proceedings of Computer Applications and Quantitative Methods in Archaeology, Williamsburg, VA
Proposed Annotation for Entities and Relations in RCAHMS Data. K Byrne, K. Byrne, 2006. Proposed Annotation for Entities and Relations in RCAHMS Data.
Definition of the CIDOC Conceptual Reference Model. N Crofts, M Doerr, T Gill, S Stead, M Stiff, ICOM/CIDOC CRM Special Interest Group. N. Crofts, M. Doerr, T. Gill, S. Stead, and M. Stiff. 2011. Definition of the CIDOC Conceptual Reference Model. ICOM/CIDOC CRM Special Interest Group, 2009.
Rapid Adaptation of NE Resolvers for Humanities Domains using Active Annotation. A Ekbal, F Bonin, S Saha, E Stemle, E Barbu, F Cavulli, C Girardi, M Poesio, Journal for Language Technology and Computational Linguistics. 26A. Ekbal, F. Bonin, S. Saha, E. Stemle, E. Barbu, F. Cavulli, C. Girardi, M. Poesio, 2012. Rapid Adaptation of NE Resolvers for Humanities Domains using Active Annotation. In Journal for Language Technology and Computational Linguistics (JLCL) 26 (2):39-51.
Formalising and specifying underquantification. A Herbelot, A Copestake, Proceedings of the Ninth International Conference on Computational Semantics, IWCS '11. the Ninth International Conference on Computational Semantics, IWCS '11Stroudsburg, PA, USAA. Herbelot and A. Copestake. 2011. Formalising and specifying underquantification. In Proceedings of the Ninth International Conference on Computational Semantics, IWCS '11, pages 165-174, Stroudsburg, PA, USA.
Underquantification: an application to mass terms. A Herbelot, A Copestake, Proceedings of Empirical, Theoretical and Computational Approaches to Countability in Natural Language. Empirical, Theoretical and Computational Approaches to Countability in Natural LanguageBochum, GermanyA. Herbelot and A. Copestake 2010. Underquantifica- tion: an application to mass terms. In Proceedings of Empirical, Theoretical and Computational Ap- proaches to Countability in Natural Language, Bochum, Germany, 2010.
I-CAB: the Italian Content Annotation Bank. B Magnini, E Pianta, C Girardi, M Negri, L Romano, M Speranza, V Bartalesi Lenzi, R Sprugnoli, B. Magnini, E. Pianta, C. Girardi, M. Negri, L. Romano, M. Speranza, V. Bartalesi Lenzi, and R. Sprugnoli. I-CAB: the Italian Content Annotation Bank: pages 963-968.
Computer tools for archaeological reference collections: The case of the ceramics of the iberian period from andalusia (Spain). In Multimedia for Cultural Heritage. A L Martinez Carrillo, A Ruiz, M J Lucena, J M Fuertes, Communications in Computer and Information Science. 247Costantino Grana and Rita CucchiaraA.L. Martinez Carrillo, A. Ruiz, M.J. Lucena, and J.M. Fuertes. 2012. Computer tools for archaeological reference collections: The case of the ceramics of the iberian period from andalusia (Spain). In Multimedia for Cultural Heritage, volume 247 of Communications in Computer and Information Science, Costantino Grana and Rita Cucchiara, editors, pages 51-62. Springer Berlin Heidelberg.
Making fine-grained and coarse-grained sense distinctions, both manually and automatically. M Palmer, H T Dang, C Fellbaum, Natural Language Engineering. 1302M. Palmer, H. T. Dang, and C. Fellbaum. 2007. Making fine-grained and coarse-grained sense distinctions, both manually and automatically. Natural Language Engineering, 13(02):137-163.
The textpro tool suite. E Pianta, C Girardi, R Zanoli, Proceedings of 6th LREC. 6th LRECMarrakechE. Pianta, C. Girardi, and R. Zanoli. 2008. The textpro tool suite. In Proceedings of 6th LREC, Marrakech.
Underspecification and Anaphora: Theoretical Issues and Preliminary Evidence. M Poesio, P Sturt, R Artstein, R Filik, Discourse Processes. 42M. Poesio, P. Sturt, R. Artstein, and R. Filik. 2006. Underspecification and Anaphora: Theoretical Issues and Preliminary Evidence. In Discourse Processes 42(2): 157-175, 2006.
The humanities research portal: Human language technology meets humanities publication repositories. M Poesio, E Barbu, F Bonin, F Cavulli, A Ekbal, C Girardi, F Nardelli, S Saha, E Stemle, Proceedings of Supporting Digital Humanitites (SDH). Supporting Digital Humanitites (SDH)CopenhagenM. Poesio, E. Barbu, F. Bonin, F. Cavulli, A. Ekbal, C. Girardi, F. Nardelli, S. Saha, and E. Stemle. 2011a. The humanities research portal: Human language technology meets humanities publication repositories. In Proceedings of Supporting Digital Humanitites (SDH), Copenhagen.
Structure-preserving pipelines for digital libraries. M Poesio, E Barbu, E Stemle, C Girardi, Proceedings of LaTeCH. LaTeCHPortland, ORM. Poesio, E. Barbu, E. Stemle, and C. Girardi. 2011b. Structure-preserving pipelines for digital libraries. In Proceedings of LaTeCH, Portland, OR.
The semantics of lexical underspecification. J Pustejovsky, Folia Linguistica. 323-4J. Pustejovsky. 1998. The semantics of lexical under- specification. Folia Linguistica, 32(3-4):323?348.
J Pustejovsky, M Verhagen, Semeval-2010 task 13 : Evaluating events, time expressions, and temporal relations. Computational Linguistics. J. Pustejovsky and M. Verhagen. 2010. Semeval-2010 task 13 : Evaluating events, time expressions, and temporal relations. Computational Linguistics, (June 2009):112-116.
Active learning literature survey. B Settles, 1648University of Wisconsin-MadisonComputer Sciences Technical ReportB. Settles. 2009. Active learning literature survey. Computer Sciences Technical Report 1648, University of Wisconsin-Madison.
Active annotation. A Vlachos, Proceedings EACL 2006 Workshop on Adaptive Text Extraction and Mining. EACL 2006 Workshop on Adaptive Text Extraction and MiningTrentoA. Vlachos. 2006. Active annotation. In Proceedings EACL 2006 Workshop on Adaptive Text Extraction and Mining, Trento. |
2,047,087 | N-gram-based SMT System Enhanced with Reordering Patterns | This work presents translation results for the three data sets made available in the shared task "Exploiting Parallel Texts for Statistical Machine Translation" of the HLT-NAACL 2006 Workshop on Statistical Machine Translation. All results presented were generated by using the Ngram-based statistical machine translation system which has been enhanced from the last year's evaluation with a tagged target language model (using Part-Of-Speech tags). For both Spanish-English translation directions and the English-to-French translation task, the baseline system allows for linguistically motivated sourceside reorderings. | [
8303276,
8884845,
1452591,
8863860,
7701908
] | N-gram-based SMT System Enhanced with Reordering Patterns
Association for Computational LinguisticsCopyright Association for Computational LinguisticsJune 2006. 2006
Josep M Crego jmcrego@gps.tsc.upc.edu
Department of Signal Theory and Communications TALP Research Center (UPC)
08034BarcelonaSpain
Marta R Costa-Jussà
Department of Signal Theory and Communications TALP Research Center (UPC)
08034BarcelonaSpain
José B Mariño
Department of Signal Theory and Communications TALP Research Center (UPC)
08034BarcelonaSpain
Adrià De Gispert agispert@gps.tsc.upc.edu
Department of Signal Theory and Communications TALP Research Center (UPC)
08034BarcelonaSpain
Maxim Khalilov khalilov@gps.tsc.upc.edu
Department of Signal Theory and Communications TALP Research Center (UPC)
08034BarcelonaSpain
José A R Fonollosa
Department of Signal Theory and Communications TALP Research Center (UPC)
08034BarcelonaSpain
Patrik Lambert lambert@gps.tsc.upc.edu
Department of Signal Theory and Communications TALP Research Center (UPC)
08034BarcelonaSpain
Rafael E Banchs rbanchs@gps.tsc.upc.edu
Department of Signal Theory and Communications TALP Research Center (UPC)
08034BarcelonaSpain
N-gram-based SMT System Enhanced with Reordering Patterns
Proceedings of the Workshop on Statistical Machine Translation
the Workshop on Statistical Machine TranslationNew York CityAssociation for Computational LinguisticsJune 2006. 2006
This work presents translation results for the three data sets made available in the shared task "Exploiting Parallel Texts for Statistical Machine Translation" of the HLT-NAACL 2006 Workshop on Statistical Machine Translation. All results presented were generated by using the Ngram-based statistical machine translation system which has been enhanced from the last year's evaluation with a tagged target language model (using Part-Of-Speech tags). For both Spanish-English translation directions and the English-to-French translation task, the baseline system allows for linguistically motivated sourceside reorderings.
Introduction
The statistical machine translation approach used in this work implements a log-linear combination of feature functions along with a translation model which is based on bilingual n-grams (de Gispert and Mariño, 2002).
This translation model differs from the well known phrase-based translation approach (Koehn et al., 2003) in two basic issues: first, training data is monotonously segmented into bilingual units; and second, the model considers n-gram probabilities instead of relative frequencies. This translation approach is described in detail in .
For those translation tasks with Spanish or English as target language, an additional tagged (using POS information) target language model is used. Additionally a reordering strategy that includes POS information is described and evaluated.
Translation results for all six translation directions proposed in the shared task are presented and discussed. Both translation directions are considered for the pairs: English-Spanish, English-French, and English-German.
The paper is structured as follows: Section 2 briefly outlines the baseline system. Section 3 describes in detail the implemented POS-based reordering strategy. Section 4 presents and discusses the shared task results and, finally, section 5 presents some conclusions and further work.
Baseline N-gram-based SMT System
As already mentioned, the translation model used here is based on bilingual n-grams. It actually constitutes a language model of bilingual units, referred to as tuples, which approximates the joint probability between source and target languages by using bilingual n-grams (de Gispert and Mariño, 2002).
Tuples are extracted from a word-to-word aligned corpus according to the following two constraints: first, tuple extraction should produce a monotonic segmentation of bilingual sentence pairs; and second, no smaller tuples can be extracted without violating the previous constraint. See (Crego et al., 2004) for further details.
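As a simplified, illustrative reconstruction of that segmentation step (our own sketch, not the authors' implementation), one can cut the aligned sentence pair at every point where no alignment link crosses, which yields the smallest monotone bilingual units; unaligned target words at sentence boundaries and other details are ignored here.

```python
# Sketch of monotone tuple extraction from a set of word alignment links.
def extract_tuples(src, trg, links):
    """src, trg: token lists; links: set of (src_idx, trg_idx) pairs."""
    tuples, s_start, t_start = [], 0, 0
    for i in range(len(src)):
        t_aligned = [t for s, t in links if s <= i]
        t_end = max(t_aligned, default=t_start - 1)
        # a cut after source position i is monotone iff no later source word
        # aligns at or before t_end
        if all(t > t_end for s, t in links if s > i):
            tuples.append((src[s_start:i + 1], trg[t_start:t_end + 1]))
            s_start, t_start = i + 1, t_end + 1
    return tuples

src = ["perfect", "translations"]
trg = ["traducciones", "perfectas"]
links = {(0, 1), (1, 0)}   # a crossing alignment forces a single two-word tuple
print(extract_tuples(src, trg, links))
```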
For all experiments presented here, the translation model consisted of a 4-gram language model of tuples. In addition to this bilingual n-gram translation model, the baseline system implements a log linear combination of five feature functions.
These five additional models are:
• A target language model. 5-gram of the target side of the bilingual corpus.
• A word bonus. Based on the number of target words in the partial-translation hypothesis, to compensate the LM preference for short sentences.
• A Source-to-target lexicon model. Based on IBM Model 1 lexical parameters (Brown et al., 1993), providing a complementary probability for each tuple in the translation table. These parameters are obtained from source-to-target alignments.
• A Target-to-source lexicon model. Analogous to the previous feature, but obtained from target-to-source alignments.
• A Tagged (POS) target language model. This feature implements a 5-gram language model of target POS-tags. In this case, each translation unit carries the information of its target-side POS-tags, though this is not used for translation model estimation (only in order to evaluate the target POS language model at decoding time). Since POS-taggers were not available for French and German, it was not possible to incorporate this feature in all translation tasks considered; it was only used for those translation tasks with Spanish and English as target languages.
The search engine for this translation system is described in (Crego et al., 2005); it implements a beam-search strategy based on dynamic programming, taking into account all feature functions described above, along with the bilingual n-gram translation model. Monotone search is performed, including histogram and threshold pruning and hypothesis recombination.
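As a rough illustration of the log-linear combination applied during search (not the actual decoder code), each partial hypothesis can be scored as a weighted sum of log feature values; the feature names, probabilities and weights below are placeholders.

```python
import math

# Sketch of log-linear hypothesis scoring.
def loglinear_score(features, weights):
    return sum(weights[name] * math.log(value) for name, value in features.items())

features = {                     # illustrative values for one partial hypothesis
    "tuple_ngram": 1e-4,         # bilingual n-gram translation model
    "target_lm": 1e-3,           # target language model
    "word_bonus": math.e ** 3,   # e.g. exp(number of target words)
    "lex_s2t": 1e-2,             # source-to-target lexicon model
    "lex_t2s": 1e-2,             # target-to-source lexicon model
    "target_pos_lm": 1e-2,       # tagged (POS) target language model
}
weights = {name: 1.0 for name in features}
print(loglinear_score(features, weights))
```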
An optimization tool, based on a downhill simplex method, was developed and used for computing the log-linear weights for each of the feature functions. This algorithm adjusts the weights so that a non-linear combination of BLEU and NIST scores is maximized over the development set for each of the six translation directions considered.
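One way such a tuning loop could be realized, sketched here with SciPy's Nelder-Mead (downhill simplex) optimizer, is shown below; `decode_and_score` is a dummy stand-in for running the decoder on the development set and scoring its output, and the product of BLEU and NIST is only one possible combination.

```python
import numpy as np
from scipy.optimize import minimize

def decode_and_score(weights):
    # Placeholder: pretend quality peaks at an interior point of weight space.
    return 1.0 - float(np.sum((np.asarray(weights) - 0.5) ** 2)), 1.0

def objective(weights):
    bleu, nist = decode_and_score(weights)
    return -(bleu * nist)        # maximise a combination of BLEU and NIST

result = minimize(objective, x0=np.ones(6), method="Nelder-Mead")
print(result.x.round(2))
```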
This baseline system is actually very similar to the system used for last year's shared task "Exploiting Parallel Texts for Statistical Machine Translation" of ACL'05 Workshop on Building and Using Parallel Texts: Data-Driven Machine Translation and Beyond , whose results are available at: http://www.statmt.org/wpt05/ mt-shared-task/. A more detailed description of the system can be found in (2005).
The tools used for POS-tagging were Freeling (Carreras et al., 2004) for Spanish and TnT (Brants, 2000) for English. All language models were estimated using the SRI language modeling toolkit. Word-to-word alignments were extracted with GIZA++. Improvements in word-toword alignments were achieved through verb group classification as described in (de Gispert, 2005).
Reordering Framework
In this section we outline the reordering framework used for the experiments. A highly constrained reordered search is performed by means of a set of reordering patterns (linguistically motivated rewrite patterns) which are used to extend the monotone search graph with additional arcs.
To extract patterns, we use the word-to-word alignments (the union of both alignment directions) and source-side POS tags. The main procedure consists of identifying all crossings produced in the word-to-word alignments. Once a crossing has been detected, its source POS tags and alignments are used to account for a new instance of pattern. The target side of a pattern (source-side positions after reordering), is computed using the original order of the target words to which the source words are aligned. See figure 1 for a clarifying example of pattern extraction.
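A simplified reconstruction of this extraction step (our own sketch, not the authors' code) is shown below; for simplicity it assumes exactly one alignment link per source word and only grows a span while the alignment keeps crossing its right boundary.

```python
# Sketch: whenever the alignment of consecutive source words crosses, the
# source POS tags form the left-hand side of a pattern and the order of the
# aligned target words gives the reordering (right-hand side).
def extract_patterns(src_pos, links):
    """src_pos: POS tag per source position; links: dict src_idx -> trg_idx."""
    patterns, i = [], 0
    while i < len(src_pos) - 1:
        j = i
        while j + 1 < len(src_pos) and any(links[a] > links[j + 1]
                                           for a in range(i, j + 1)):
            j += 1                       # the span still crosses: keep growing
        if j > i:
            span = list(range(i, j + 1))
            reordered = sorted(span, key=lambda s: links[s])
            patterns.append((tuple(src_pos[s] for s in span),
                             tuple(s - i for s in reordered)))
        i = j + 1
    return patterns

# 'perfect translations' -> 'traducciones perfectas': ADJ NOUN is rewritten as 1 0.
print(extract_patterns(["ADJ", "NOUN"], {0: 1, 1: 0}))
```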
The monotone search graph is extended with reorderings following the patterns found in training. The procedure identifies first the sequences of words in the input sentence that match any available pattern. Then, each of the matchings implies the addition of an arc into the search graph (encoding the reordering learnt in the pattern). However, this addition of a new arc is not performed if a translation unit with the same source-side words already exists in the training. Figure 2 shows an example of the procedure.
Figure 2: Three additional arcs have been added to the original monotone graph (bold arcs), given the reordering patterns found to match the source POS tag sequences.
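The arc-adding procedure illustrated in Figure 2 could be sketched as follows; the list-of-paths representation is a simplification of the actual word graph, and `known_units` stands in for the check against already existing translation units.

```python
# Sketch: every input span whose POS tags match a pattern contributes one extra
# path with the source words permuted as the pattern dictates.
def reordering_paths(words, pos, patterns, known_units=()):
    paths = []
    for lhs, order in patterns:
        n = len(lhs)
        for start in range(len(pos) - n + 1):
            span_words = tuple(words[start:start + n])
            if tuple(pos[start:start + n]) == lhs and span_words not in known_units:
                reordered = [span_words[k] for k in order]
                paths.append(words[:start] + reordered + words[start + n:])
    return paths

words = ["traducciones", "perfectas", "de", "calidad"]
pos = ["NOUN", "ADJ", "PREP", "NOUN"]
patterns = [(("NOUN", "ADJ"), (1, 0))]   # learnt from training crossings
print(reordering_paths(words, pos, patterns))
```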
Once the search graph is built, the decoder traverses the graph looking for the best translation. Hence, the winning hypothesis is computed using all the available information (the whole set of SMT models). The reordering strategy is additionally supported by a 5-gram language model of reordered source POS-tags. In training, POS-tags are reordered according to the extracted reordering patterns and word-to-word links. The resulting sequences of source POS-tags are used to train the n-gram LM.
Notice that this reordering framework has only been used for some translation tasks (Spanish-to-English, English-to-Spanish and English-to-French). The reason is twofold: first, we did not have a French POS-tagger available. Second, the technique used to learn reorderings (detailed below) does not seem to apply to language pairs like German-English, because of the agglutinative characteristic of German (words are formed by joining morphemes together). Table 1 shows the improvement of the original baseline system described in section 2 (base), enhanced using reordering graphs (+rgraph) and further provided with the tagged-source language model (+pos). The experiments in Table 1 were not carried out over the official corpus of this shared task; the Spanish-English corpus of the TC-Star 2005 Evaluation was used. Due to the high similarity between both corpora (this shared task corpus consists of a subset of the whole corpus used in the TC-Star 2005 Evaluation), it makes sense to think that comparable results would be obtained.
It is worth mentioning that the official corpus of the shared task (HLT-NAACL 2006) was used when building and tuning the present shared task system.
Shared Task Results
The data provided for this shared task corresponds to a subset of the official transcriptions of the European Parliament Plenary Sessions. The development set used to tune the system consists of a subset (the first 500 sentences) of the official development set made available for the Shared Task. Table 2 presents the BLEU, NIST and mWER scores obtained for the development-test data set. The last column shows whether the target POS language model feature was used or not. Computed scores are case sensitive and compare to one reference translation. Tasks in bold were conducted allowing for the reordering framework. For the French-to-English task, a block reordering strategy was used, which is described in (Costa-jussà et al., 2006). As can be seen, for the English-to-German task we did not use any of the previous enhancements. Important differences can be observed between the German-English tasks and the rest of the translation tasks. They result from the greater differences in word order present in this language pair (the German-English results are obtained under monotone decoding conditions), and also from the greater vocabulary of German, which increases sparseness in any task where German is involved. As expected, differences in translation accuracy between Spanish-English and French-English are smaller.
Conclusions and Further Work
As it can be concluded from the presented results, although in principle some language pairs (Spanish-English-French) seem to have very little need for reorderings (due to their similar word order), the use of linguistically-based reorderings proves to be useful to improve translation accuracy.
Additional work is to be conducted to allow for reorderings when translating from/to German.
Figure 1: Reordering patterns are extracted using word-to-word alignments. The generalization power is achieved through the POS tags. Three instances of different patterns are extracted using the sentences in the example.
Table 1: BLEU, NIST and mWER scores (computed using two reference translations) obtained for both translation directions (Spanish-to-English and English-to-Spanish).

Conf      BLEU   NIST   mWER
Spanish-to-English
base      55.23  10.69  34.40
+rgraph   55.59  10.70  34.23
+pos      56.39  10.75  33.75
English-to-Spanish
base      48.03  9.84   41.18
+rgraph   48.53  9.81   41.15
+pos      48.91  9.91   40.29
Table 2: Translation results

Task      BLEU   NIST  mWER   tPOS
en → es   29.50  7.32  58.95  yes
es → en   30.29  7.51  57.72  yes
en → fr   30.23  7.40  59.76  no
fr → en   30.21  7.61  56.97  yes
en → de   17.40  5.61  71.18  no
de → en   23.78  6.70  65.83  yes
http://www.tc-star.org
Acknowledgments
This work was partly funded by the European Union under the integrated project TC-STAR 1 : Technology and Corpora for Speech to Speech Translation (IST-2002-FP6-506738) and the European Social Fund.
R. E. Banchs, J. M. Crego, A. de Gispert, P. Lambert, and J. B. Mariño. 2005. Statistical machine translation of Euparl data by using bilingual n-grams. In Proc. of the ACL Workshop on Building and Using Parallel Texts, pages 67-72, June.
T. Brants. 2000. TnT -- a statistical part-of-speech tagger. In Proc. of the Sixth Applied Natural Language Processing Conference (ANLP-2000), Seattle, WA.
P. Brown, S. Della Pietra, V. Della Pietra, and R. Mercer. 1993. The mathematics of statistical machine translation. Computational Linguistics, 19(2):263-311.
X. Carreras, I. Chao, L. Padró, and M. Padró. 2004. FreeLing: An open-source suite of language analyzers. In Proc. of the 4th Int. Conf. on Language Resources and Evaluation (LREC'04), May.
M. R. Costa-jussà, J. M. Crego, A. de Gispert, P. Lambert, M. Khalilov, R. Banchs, J. B. Mariño, and J. A. R. Fonollosa. 2006. TALP phrase-based statistical translation system for European language pairs. In Proc. of the HLT/NAACL Workshop on Statistical Machine Translation, June.
J. M. Crego and J. Mariño. 2006. A reordering framework for statistical machine translation. Internal report.
J. M. Crego, J. Mariño, and A. de Gispert. 2004. Finite-state-based and phrase-based statistical machine translation. In Proc. of the 8th Int. Conf. on Spoken Language Processing (ICSLP'04), pages 37-40, October.
J. M. Crego, J. Mariño, and A. de Gispert. 2005. An ngram-based statistical machine translation decoder. In Proc. of the 9th European Conference on Speech Communication and Technology (Interspeech'05), September.
A. de Gispert and J. Mariño. 2002. Using X-grams for speech-to-speech translation. In Proc. of the 7th Int. Conf. on Spoken Language Processing (ICSLP'02), September.
A. de Gispert. 2005. Phrase linguistic classification and generalization for improving statistical machine translation. In Proc. of the ACL Student Research Workshop (ACL'05/SRW), June.
P. Koehn, F. J. Och, and D. Marcu. 2003. Statistical phrase-based translation. In Proc. of the Human Language Technology Conference (HLT-NAACL 2003), May.
J. B. Mariño, R. Banchs, J. M. Crego, A. de Gispert, P. Lambert, M. R. Costa-jussà, and J. A. R. Fonollosa. 2005. Bilingual n-gram statistical machine translation. In Proc. of the MT Summit X, September.
11,658,861 | Automated Essay Scoring by Maximizing Human-machine Agreement | Previous approaches for automated essay scoring (AES) learn a rating model by minimizing either the classification, regression, or pairwise classification loss, depending on the learning algorithm used. In this paper, we argue that the current AES systems can be further improved by taking into account the agreement between human and machine raters. To this end, we propose a rankbased approach that utilizes listwise learning to rank algorithms for learning a rating model, where the agreement between the human and machine raters is directly incorporated into the loss function. Various linguistic and statistical features are utilized to facilitate the learning algorithms. Experiments on the publicly available English essay dataset, Automated Student Assessment Prize (ASAP), show that our proposed approach outperforms the state-of-the-art algorithms, and achieves performance comparable to professional human raters, which suggests the effectiveness of our proposed method for automated essay scoring. | [
13475584,
1499545,
6441666,
10894148
] | Automated Essay Scoring by Maximizing Human-machine Agreement
Association for Computational Linguistics, 18-21 October 2013
Hongbo Chen chenhongbo11@mails.ucas.ac.cn
School of Computer and Control Engineering
University of Chinese Academy of Sciences
100190 Beijing, China
Ben He benhe@ucas.ac.cn
School of Computer and Control Engineering
University of Chinese Academy of Sciences
100190 Beijing, China
Automated Essay Scoring by Maximizing Human-machine Agreement
Proceedings of the 2013 Conference on Empirical Methods in Natural Language Processing
Seattle, Washington, USA, 18-21 October 2013. Association for Computational Linguistics.
Previous approaches for automated essay scoring (AES) learn a rating model by minimizing either the classification, regression, or pairwise classification loss, depending on the learning algorithm used. In this paper, we argue that the current AES systems can be further improved by taking into account the agreement between human and machine raters. To this end, we propose a rankbased approach that utilizes listwise learning to rank algorithms for learning a rating model, where the agreement between the human and machine raters is directly incorporated into the loss function. Various linguistic and statistical features are utilized to facilitate the learning algorithms. Experiments on the publicly available English essay dataset, Automated Student Assessment Prize (ASAP), show that our proposed approach outperforms the state-of-the-art algorithms, and achieves performance comparable to professional human raters, which suggests the effectiveness of our proposed method for automated essay scoring.
Introduction
Automated essay scoring utilizes NLP techniques to automatically rate essays written for given prompts, namely essay topics, in an educational setting (Dikli, 2006). Nowadays, AES systems have been put into practical use in large-scale English tests and play the role of one human rater. For example, before AES systems entered the picture, essays in the writing assessment of the Graduate Record Examination (GRE) were rated by two human raters, and a third human rater was needed when the scores given by the two differed by more than one point on the 6-point scale. Currently, GRE essays are rated by one human rater and one AES system, and a second human rater is required only when there is a non-negligible disagreement between the first human rater and the machine rater. With the help of an AES system that agrees closely with human raters, the human workload can be reduced by up to half. Therefore, the agreement between the AES system and the human rater is an important indicator of an AES system's effectiveness.
There have been efforts in developing AES methods since the 1960s. Various kinds of algorithms and models based on NLP and machine learning techniques have been proposed to implement AES systems. Existing approaches treat essay rating as a classification (Larkey, 1998), regression (Attali and Burstein, 2006) or preference ranking problem (Yannakoudakis et al., 2011), where the loss function is the classification loss, regression loss and pairwise classification loss, respectively. In this paper, we argue that the purpose of AES is to predict the rating that human raters would give an essay. If an AES system frequently disagrees with the first human rater, a second human rater will be needed in most cases, and the introduction of the AES system then brings little benefit in reducing the human workload. It is therefore desirable to minimize the disagreement between the machine and human raters. However, this disagreement is not explicitly, if at all, addressed in current AES methods.
To this end, we propose a rank-based approach that addresses automated essay scoring by directly optimizing the agreement between human raters and the AES system, using a listwise learning to rank algorithm. Different from the preference ranking-based approach (Yannakoudakis et al., 2011), which maximizes the pairwise classification precision (Liu, 2009), our rank-based approach follows the listwise learning paradigm, and the agreement between the machine and human raters is directly integrated into the loss function, which is optimized by gradient boosted regression trees.
To the best of our knowledge, this work is the first to apply a listwise learning to rank approach to AES, aiming at the optimization of the agreement between the human and machine raters. Experimental results on the publicly available ASAP dataset indicate that our proposed method achieves high agreement with human raters, about 0.80 measured by quadratic weighted Kappa (Brenner and Kliebsch, 1996). Our proposed method also outperforms the previous classification, regression and preference ranking based approaches. As it is widely accepted that the agreement between human raters, measured by either quadratic weighted Kappa or Pearson's correlation coefficient, ranges from 0.70 to 0.80 (Powers et al., 2000) (Williamson, 2009), our proposed approach performs as well as professional human raters.
The rest of this paper is organized as follows. In section 2, we introduce the research background of automated essay scoring and give a brief introduction to learning to rank. In section 3, a detailed description of our listwise learning to rank approach for automated essay scoring is presented. Section 4 explains the experimental setup and section 5 presents the experimental results. Finally, in section 6 we conclude this research.
Related Work and Background
Firstly, we give a brief description of existing approaches for AES in section 2.1. Then, an introduction to learning to rank is presented in section 2.2.
Existing AES Methods
In general, existing solutions consider AES as a learning problem. Based on a large number of predefined, objectively measurable features, various learning techniques, including classification, regression and preference ranking, are applied (Larkey, 1998) (Yannakoudakis et al., 2011).
The regression-based approach treats the feature values and the essay score as independent variables and dependent variable, respectively, and learns a regression equation using classical regression algorithms such as support vector regression (Vapnik et al., 1996). In 1966, the first AES system, Project Essay Grader, was developed by Ellis Page upon the request of the American College Board. The PEG system defines a large set of surface text features of essays, e.g. the fourth root of essay length, and uses a regression-based approach to predict the score that human raters would give. E-rater, developed by Educational Testing Service (ETS) in America in the late 1990s, is a commercial AES system which has been put into practical use in the Graduate Record Examination (GRE) and the Test of English as a Foreign Language (TOEFL). The E-rater system uses natural language processing techniques to extract various kinds of linguistic features of essays, such as lexical, syntactic and grammar features, and then predicts the final score with the stepwise regression method (Attali and Burstein, 2006).
The classification-based approach treats essay scores as unordered class labels and uses classical classification algorithms, e.g. the K-nearest neighbor (KNN) and the naive Bayesian model, to predict to which class an essay belongs, where each class is associated with a numeric rating. Intelligent Essay Assessor (IEA) (Foltz et al., 1999), also developed in the late 1990s, evaluates essays by measuring semantic features. Each ungraded essay, represented by a semantic vector generated by Latent Semantic Analysis (LSA) (Dumais, 2005), is rated according to its similarity with the semantic vectors of graded essays. Bayesian Essay Test Scoring sYstem, developed by Larkey in 2003, is based on the naive Bayesian model. It is the only open-source AES system, but it has not been put into practical use yet.
Besides classification and regression-based approaches, (Yannakoudakis et al., 2011) proposed a preference ranking based approach for learning a rating model, where a ranking function or model is learned to construct a global ordering of essays based on writing quality. It is also the first study of a rank-based approach in automated essay scoring. Although "learning to rank" is not mentioned in their paper, the algorithm they used, Ranking SVM (the svm-light package with the "-z p" option), is actually a pairwise approach. We give a brief introduction to learning to rank in section 2.2.
AES systems can be deployed in two different manners, namely prompt-specific and generic. A prompt-specific rating model is built for a specific prompt and designed to be the best rating model for that particular prompt (Williamson, 2009). For different prompts, the features used, their weights, and the scoring criteria may differ. This usually requires several hundred graded essays for training, which is time-consuming and usually impractical in a classroom environment. A generic rating model is trained from essays across a group of prompts and designed to be the best fit for predicting human scores for all prompts. It usually does not consider prompt-specific features and takes only writing quality into account. A generic rating model evaluates essays across all prompts with the same scoring criteria, which is more consistent with the human rubric that is usually the same for all prompts, and therefore has validity-related advantages (Attali et al., 2010).
Learning to Rank
Learning to rank, also called machine-learned ranking, was originally proposed to address the ranking problem in information retrieval (IR) (Liu, 2009). It is a type of supervised or semi-supervised machine learning that automatically constructs a ranking model or function from training data.
Current learning to rank algorithms fall into three categories, namely the pointwise, pairwise and listwise approaches. The pointwise approach takes individual documents as training examples for learning a scoring function. In fact, both multiple linear regression and support vector regression (Vapnik et al., 1996), which have been widely used in automated essay scoring (Shermis and Burstein, 2002), can be seen as pointwise approaches. Pairwise approaches process a pair of documents at a time and usually model ranking as a pairwise classification problem; thus, the loss function is a classification loss. Representative algorithms are Ranking SVM (Joachims, 2006) and RankNet (Li et al., 2007). (Yannakoudakis et al., 2011) applied a pairwise approach, Ranking SVM, to automated essay scoring and achieved better performance than support vector regression. In listwise approaches, ranking algorithms process a list of documents at a time, and the loss function measures the agreement between the predicted ranking list and the ground-truth labels. Representative algorithms are LambdaMART (Wu et al., 2008) and RankCosine (Qin et al., 2008). The listwise approach has not yet been used in automated essay scoring.
Automated Essay Scoring by Maximizing Human-machine Agreement
The main workflow of our proposed approach is as follows. Firstly, a set of essays rated by professional human raters is gathered for training. A listwise learning to rank algorithm learns a ranking model or function from this set of human-rated essays, represented by vectors of the pre-defined features. The learned ranking model or function then outputs a model score for each essay, including both rated and unrated essays, from which a global ordering of essays is constructed. Finally, the model score is mapped to a predefined scale of valid ratings, such as an integer from 1 to 6 on a 6-point scale.

In this section, we give a detailed description of our listwise learning to rank approach for AES in section 3.1. The features used in our approach are presented in section 3.2.
Listwise Learning to Rank for AES
Our choice of a listwise learning to rank algorithm is due to the fact that it takes the entire set of labeled essays associated with a given prompt as training examples, instead of individual essays or essay pairs as in (Yannakoudakis et al., 2011). This makes it straightforward to embed the inter-rater agreement into the loss function used for learning.
In this paper, we deploy LambdaMART (Wu et al., 2008), a listwise learning to rank algorithm, and use Random Forests (RF) (Breiman, 2001) for the bagging of LambdaMART learners. Having been widely used in information retrieval applications, LambdaMART is one of the most effective learning to rank algorithms; for instance, it achieved the top results in the 2010 Yahoo! Learning to Rank challenge (Burges, 2010). Random Forests is an ensemble learning method for classification and regression.
Previously, the loss function of LambdaMART was defined as the gradient loss of the retrieval effectiveness, measured by IR evaluation criteria such as Normalized Discounted Cumulative Gain (nDCG) (Wu et al., 2008). More specifically, it is a heuristic method that directly defines $\lambda$, the gradient of nDCG with respect to the model score of each document, and has been shown to work empirically for particular loss functions such as nDCG (Yue and Burges, 2007). Then, Multiple Additive Regression Trees (MART) (Friedman, 2000), also called Gradient Boosting Decision Trees (GBDT), are used to "learn" these gradients iteratively. MART is a class of boosting algorithms that performs gradient descent in function space using regression trees. Its output can be written as $F(x) = \sum_{i=1}^{N} \alpha_i f_i(x)$, where each $f_i(x)$ is a function modeled by a single regression tree and $\alpha_i$ is the corresponding weight. Given that $n$ trees have been trained, the $(n+1)$-th regression tree, $f_{n+1}(x)$, models the derivative of the cost with respect to the current model score at each training point. Thus, what remains is to compute this derivative.
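To make the additive structure concrete, the following is a minimal sketch of a MART-style loop, not the implementation used in this paper: a `compute_lambdas` callback supplies the per-example gradients (whatever their definition), each tree is fitted to those gradients, and the trees are accumulated into $F(x)$. The learning rate and tree settings are illustrative assumptions.

```python
from sklearn.tree import DecisionTreeRegressor

def mart_fit(X, targets, compute_lambdas, n_trees=100, learning_rate=0.1, max_leaf_nodes=100):
    """Generic MART loop: each new tree is fitted to the current gradients (lambdas)."""
    trees, scores = [], [0.0] * len(X)
    for _ in range(n_trees):
        lambdas = compute_lambdas(scores, targets)   # gradient of the cost w.r.t. each model score
        tree = DecisionTreeRegressor(max_leaf_nodes=max_leaf_nodes)
        tree.fit(X, lambdas)
        trees.append(tree)
        # F_{n+1}(x) = F_n(x) + alpha * f_{n+1}(x)
        scores = [s + learning_rate * p for s, p in zip(scores, tree.predict(X))]
    return trees

def mart_predict(trees, X, learning_rate=0.1):
    """Evaluate the additive model F(x) on new examples."""
    scores = [0.0] * len(X)
    for tree in trees:
        scores = [s + learning_rate * p for s, p in zip(scores, tree.predict(X))]
    return scores
```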
For automated essay scoring, LambdaMART is not readily applicable, since its loss function is defined in terms of the gradient of IR evaluation measures. While such measures focus on the top-ranked documents, which are of great importance in IR applications, they are not suitable for our study, because in AES the rating prediction of all essays matters equally, no matter what ratings they receive.
It is therefore necessary to re-define $\lambda$. Specifically, we need to define the gradient of the evaluation criteria used in AES, e.g. quadratic weighted Kappa (Brenner and Kliebsch, 1996) or Pearson's correlation coefficient, with respect to the model score of each essay. In this paper, we use quadratic weighted Kappa as the evaluation metric. Kappa (Cohen, 1960) is a statistical metric used to measure inter-rater agreement; quadratic weighted Kappa additionally takes the degree of disagreement between raters into account. This measure is widely accepted as a primary evaluation metric for AES tasks; for instance, it is the official evaluation metric in the Automated Student Assessment Prize sponsored by the Hewlett Foundation 2 . We denote our modified LambdaMART as K-LambdaMART, in which K stands for the Kappa-based gradient function. The specific steps are as follows.
To begin with, we re-define $\lambda_{i,j}$ for each pair of essays. For a pair of essays, essay $i$ and essay $j$, $\lambda_{i,j}$ is defined as the derivative of the RankNet (Li et al., 2007) loss function multiplied by the quadratic weighted Kappa gain obtained after exchanging the two essays' ratings:
$$\lambda_{i,j} = \frac{-\delta}{1 + e^{\delta(s_i - s_j)}}\,\left|\Delta_{\mathrm{Kappa}}\right| \qquad (1)$$
$s_i$ and $s_j$ are the model scores for essay $i$ and essay $j$, respectively, and $\delta$ is a parameter that determines the shape of the sigmoid. The quadratic weighted Kappa is calculated as follows:
$$\kappa = 1 - \frac{\sum_{i,j} \omega_{i,j} O_{i,j}}{\sum_{i,j} \omega_{i,j} E_{i,j}} \qquad (2)$$
In matrix $O$, $O_{i,j}$ is the number of essays that received a score $i$ from the human rater and a score $j$ from the AES system. In matrix $\omega$, $\omega_{i,j} = \frac{(i-j)^2}{(N-1)^2}$ is the weighted difference between the raters' scores, where $N$ is the number of possible ratings. Matrix $E$ is calculated as the outer product of the two raters' score histogram vectors, normalized such that $E$ and $O$ have the same sum.
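As a reference for Eq. (2), the following is a small sketch of the quadratic weighted Kappa computation; it assumes integer ratings on a 1..N scale and is not the official ASAP scorer.

```python
import numpy as np

def quadratic_weighted_kappa(human, machine, n_ratings):
    """Quadratic weighted Kappa between two integer rating vectors (cf. Eq. 2)."""
    human = np.asarray(human) - 1             # shift 1..N ratings to 0..N-1 indices
    machine = np.asarray(machine) - 1
    O = np.zeros((n_ratings, n_ratings))
    for h, m in zip(human, machine):
        O[h, m] += 1                          # observed rating co-occurrences
    w = np.array([[(i - j) ** 2 / (n_ratings - 1) ** 2
                   for j in range(n_ratings)] for i in range(n_ratings)])
    E = np.outer(np.bincount(human, minlength=n_ratings),
                 np.bincount(machine, minlength=n_ratings)).astype(float)
    E *= O.sum() / E.sum()                    # normalise so E and O have the same sum
    return 1.0 - (w * O).sum() / (w * E).sum()
```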
It is also necessary to define the quadratic weighted Kappa gain, namely $\Delta_{\mathrm{Kappa}}$, in an explicit manner. In each iteration, every essay is ranked by its model score and then rated according to its ranking position. For example, for five essays $e_1, e_2, e_3, e_4, e_5$ with actual ratings 5, 4, 3, 2, 1, if the ranking by model score is $e_3, e_4, e_1, e_5, e_2$, we assume that $e_3, e_4, e_1, e_5, e_2$ will get ratings of 5, 4, 3, 2, 1, over which the quadratic weighted Kappa gain can be calculated.
After the definition of $\lambda_{i,j}$ for each pair of essays, we re-define $\lambda$, the gradient for each essay. Let $I$ denote the set of pairs of indices $\langle i, j \rangle$ in which essay $i$ receives a higher rating than essay $j$; the set $I$ includes each pair just once. The $\lambda$ gradient for essay $i$ is then defined as

$$\lambda_i = \sum_{j:\langle i,j \rangle \in I} \lambda_{i,j} - \sum_{j:\langle j,i \rangle \in I} \lambda_{i,j} \qquad (3)$$
The rationale behind the above formulae is as follows. For each essay in the collection associated with the same prompt, e.g. essay $i$, the gradient $\lambda_i$ is incremented by a positive value $\lambda_{i,j}$ whenever another essay $j$ has a lower rating. The value of $\lambda_{i,j}$ is weighted by the quadratic weighted Kappa gain obtained after exchanging the two essays' ratings. Conversely, the gradient $\lambda_i$ is incremented by a negative value $-\lambda_{i,j}$ when the other essay has a higher rating. As a result, after each iteration of MART, essays with higher ratings tend to receive higher model scores, while essays with lower ratings tend to receive lower model scores.
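A minimal sketch of this gradient accumulation is given below. It follows the sign convention of the description above (higher-rated essays pushed up), and the `delta_kappa` argument is an assumed helper, not part of the original system: it should return the quadratic weighted Kappa gain obtained by swapping the rank-induced ratings of essays i and j.

```python
import math

def essay_lambdas(model_scores, true_ratings, delta_kappa, delta=1.0):
    """Accumulate the per-essay gradients of Eq. (3) for the essays of one prompt."""
    n = len(model_scores)
    lambdas = [0.0] * n
    for i in range(n):
        for j in range(n):
            if true_ratings[i] <= true_ratings[j]:
                continue                      # each pair <i, j> with rating(i) > rating(j) is seen once
            gain = abs(delta_kappa(i, j, model_scores, true_ratings))
            l_ij = gain * delta / (1.0 + math.exp(delta * (model_scores[i] - model_scores[j])))
            lambdas[i] += l_ij                # higher-rated essay is pushed towards a higher score
            lambdas[j] -= l_ij                # lower-rated essay is pushed towards a lower score
    return lambdas
```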
After the training process, the ranking model outputs an unscaled model score for each ungraded essay. To determine the final rating of a given unrated essay, we map this unscaled model score to the predefined scale, such as an integer from 1 to 6 on a 6-point scale. The mapping process is as follows. First, the learned ranking model also computes an unscaled model score for each essay in the training set. As the model is trained by a learning to rank algorithm, essays with higher model scores tend to have higher actual ratings; in other words, essays with close model scores tend to have the same rating. Therefore, we select the $k$ training essays whose model scores are closest to that of the given essay and remove the essays with the highest and lowest model scores among these $k$. The final rating is the mean of the remaining $k-2$ essays' ratings. In this paper, $k$ is empirically set to 5, obtained in our preliminary experiments on the ASAP validation set.
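The following is a small sketch of this score-to-rating mapping under the stated setting (k = 5); rounding the mean to the nearest integer rating is our assumption, since the text only specifies taking the mean.

```python
def score_to_rating(model_score, train_scores, train_ratings, k=5):
    """Map an unscaled model score to the rating scale via the k nearest training essays."""
    # k training essays whose model scores are closest to the given essay's score
    nearest = sorted(range(len(train_scores)),
                     key=lambda idx: abs(train_scores[idx] - model_score))[:k]
    # drop the essays with the highest and lowest model scores among the k
    by_score = sorted(nearest, key=lambda idx: train_scores[idx])
    kept = by_score[1:-1]
    mean_rating = sum(train_ratings[idx] for idx in kept) / len(kept)
    return round(mean_rating)   # rounding to an integer rating is an assumption
```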
Finally, the Random Forests algorithm is used to bag the K-LambdaMART learners. During the training process, both features and samples are randomly selected for each K-LambdaMART learner. In the testing phase, the ensemble outputs, for each test sample, the mode of the scores produced by the individual K-LambdaMART learners.
Pre-defined Features
We pre-define four types of features that indicate essay quality: lexical, syntactical, grammar and fluency, and content and prompt-specific features. A brief description of these four classes of features is given below.

Lexical features: We define 4 subsets of lexical features. Each subset consists of one or several sub-features.
-Statistics of word length: The number of words with length in characters larger than 4, 6, 8, 10, 12, respectively. The mean and variance of word length in characters.
-Word level: All words in the Webster dictionary 3 are divided into 8 levels according to the College Board Vocabulary Study (Breland et al., 1994). The higher the level a word belongs to, the more sophisticated vocabulary usage it indicates. For example, words like thoroughfare and percolate are in level 8, while words with the same meanings, street and filter, belong to level 1. We count the number of words belonging to each level and calculate the mean word level of a given essay.
-Unique words: The number of unique words appearing in each essay, normalized by the essay length in words.
-Spelling errors: The number of spelling errors detected by the spelling check API provided by Google 4 .

Syntactical features: There are 4 subsets of syntactical features.
-Statistics of sentence length: The number of sentences with length in words larger than 10, 18, 25, respectively. The mean and variance of sentence length in words.
-Subclauses: The mean number of subclauses in each sentence, normalized by sentence length in words, and the mean subclause length in words. Subclauses are labeled "SBAR" in the parse tree generated by Stanford Core NLP, a commonly used integrated suite of natural language processing tools for English in Java 5 , including part-of-speech tagging, parsing, coreference, etc.
-Sentence level: The sum of the depths of all nodes in the parse tree generated by Stanford Core NLP. The height of the parse tree is also incorporated into the feature set.
-Mode, preposition, comma: The number of modal verbs, prepositions and commas in each sentence, respectively, normalized by sentence length in words. Part of speech (POS) is detected by Stanford Core NLP (Toutanova et al., 2003); the POS tags of modal verbs and prepositions are "MD" and "IN", respectively.

Grammar and fluency features: There are two subsets of grammar and fluency features.
-Word bigram and trigram: We evaluate the grammar and fluency of an essay by calculating the mean tf/TF of word bigrams and trigrams (Briscoe et al., 2010), where tf is the term frequency in a single essay and TF is the term frequency in the whole essay collection. We treat a bigram or trigram with a high tf/TF as a likely grammar error, because a high tf/TF means that the bigram or trigram is not commonly used in the whole essay collection but appears in this specific essay.
-POS bigram and trigram: Mean tf/TF of POS bigrams and trigrams, with the same reasoning as for word bigrams and trigrams.

Content and prompt-specific features: We define four subsets of content and prompt-specific features.
-Essay length: Essay length in characters and in words, respectively. The fourth root of essay length in words has been shown to be highly correlated with the essay score (Shermis and Burstein, 2002).
-Word vector similarity: Mean cosine similarity of word vectors, whose elements are the term frequency multiplied by the inverse document frequency (tf-idf) (Salton, 1971) of each word. It is calculated as the weighted mean of the cosine similarities with graded essays, with the corresponding essay scores as weights (a sketch is given after this feature list).
-Semantic vector similarity: Semantic vectors are generated by Latent Semantic Analysis (Dumais, 2005). The mean cosine similarity of semantic vectors is calculated in the same way as the word vector similarity.
-Text coherence: Coherence in writing means that all the ideas in a paragraph flow smoothly from one sentence to the next. We only consider nouns and pronouns in each sentence, as they convey more information. The relevance degree between a sentence and the next one in the same paragraph is calculated as the sum of the similarity degrees between the nouns and pronouns appearing in the two sentences, normalized by the sum of the two sentences' lengths in words. The similarity degree between two words is set to 1 if coreference exists, as indicated by Stanford Core NLP (Lee et al., 2013); otherwise, it is measured by the WordNet similarity package (Pedersen et al., 2004). Finally, text coherence is computed as the average relevance degree over all pairs of neighboring sentences.
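As an illustration of the word vector similarity feature referenced above, the following is a small sketch under our reading of the description: tf-idf word vectors, cosine similarity to each graded essay, and a mean weighted by the graded essays' scores. The idf table and the whitespace tokenization are assumptions supplied from outside the sketch.

```python
import math
from collections import Counter

def tfidf_vector(text, idf):
    """Sparse tf-idf vector for a whitespace-tokenized essay (idf is a precomputed dict)."""
    counts = Counter(text.lower().split())
    return {w: tf * idf.get(w, 0.0) for w, tf in counts.items()}

def cosine(u, v):
    dot = sum(x * v.get(w, 0.0) for w, x in u.items())
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def word_vector_similarity(essay, graded_essays, idf):
    """Score-weighted mean cosine similarity between an essay and the graded essays."""
    v = tfidf_vector(essay, idf)
    num = sum(score * cosine(v, tfidf_vector(text, idf)) for text, score in graded_essays)
    den = sum(score for _, score in graded_essays)
    return num / den if den else 0.0
```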
The rating model is learned offline using a set of training essays. For a given target essay, it is the feature extraction that mainly accounts for the overhead. In our experiments, it takes on average no more than 10 seconds on a desktop PC with an Intel i5-2410M CPU running at 2.3 GHz to extract the pre-defined features and predict a rating for a given essay, which is affordable compared to the cost of a human rater.
Experimental Setup
This section presents our experimental design, including the test dataset used, the configuration of the tested algorithms, feature selection and the evaluation methodology.
Test Dataset
The dataset used in our experiments comes from the Automated Student Assessment Prize (ASAP) 1 , which is sponsored by the William and Flora Hewlett Foundation. The dataset in this competition 6 consists of eight essay sets, each generated from a single prompt. The number of essays associated with each prompt ranges from 900 to 1800, and the average essay length in words in each essay set ranges from 150 to 650. All essays were written by students in different grades and received a resolved score, namely the actual rating, from professional human raters. Moreover, ASAP comes with a validation set that can be used for parameter tuning; there is no overlap between this validation set and the test set used in our evaluation.
In AES, the agreement between the human and machine raters is the most important measure of success. We use quadratic weighted Kappa to evaluate the agreement between the ratings given by the AES algorithm and the actual ratings. It is widely accepted as a reasonable evaluation measure for AES systems (Williamson, 2009), and is also the official evaluation measure in the ASAP AES competition. It is calculated over all essay topics: if the essays come from $n$ essay topics, we first calculate the agreement degree on each essay topic and then compute the overall agreement degree in the z-space. In addition, analysis of variance (ANOVA) (Scheffe, 1999) is conducted to test whether a significant difference exists between the two groups of scores given by the human and machine raters.
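The exact z-space averaging is not spelled out in the text; a common choice, also used for the ASAP competition, is the Fisher z transformation, sketched below under that assumption.

```python
import math

def mean_kappa_z_space(kappas, eps=1e-15):
    """Average per-topic kappas in Fisher z-space, then transform the mean back."""
    zs = []
    for k in kappas:
        k = max(min(k, 1 - eps), -1 + eps)             # clip to avoid infinite z values
        zs.append(0.5 * math.log((1 + k) / (1 - k)))   # Fisher z = arctanh(kappa)
    z_mean = sum(zs) / len(zs)
    return math.tanh(z_mean)                           # back-transform to the kappa scale
```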
Configuration of Testing Algorithms
Random Forests bagging K-LambdaMART: We denote our proposed method K-LambdaMART, where K stands for the Kappa-based gradient. Our implementation of RF bagging K-LambdaMART is based on the open-source RankLib toolkit 7 , a library in which many popular learning to rank algorithms have been implemented, e.g. LambdaMART and RankNet (Li et al., 2007). The empirical parameter settings, obtained by preliminary experiments on the ASAP validation set, are as follows. For bagging: the number of bags is set to 300, the subsampling rate to 0.80 and the feature sampling rate to 0.50. For the LambdaMART learner in each bag: the number of trees is set to 1, the number of tree leaves to 100, and the other parameters are left at their defaults.
Baseline algorithms: We use classical machine learning algorithms, namely support vector machines (SVM) for classification, regression (Vapnik et al., 1996) and preference ranking (Joachims, 2006), as baselines. These three algorithms have been used for AES in the literature (Briscoe et al., 2010) (Yannakoudakis et al., 2011). In particular, the state-of-the-art AES approach proposed by (Yannakoudakis et al., 2011) utilizes SVM for preference ranking, a pairwise learning to rank algorithm, for training a rating model. The linear kernel is used in the experiments. The parameter C, which controls the trade-off between the empirical loss and the regularizer, is set by grid search on the ASAP validation set.
The original LambdaMART is not included in the baseline algorithms, as it has been shown that the performance of LambdaMART is inferior to that of Ranking SVM on the same dataset (Chen et al., 2012).

6 http://www.kaggle.com/c/asap-sas/data
Feature Selection
Although machine learning approaches usually use all the features available for training, we try to obtain a carefully selected feature set that can withstand the scrutiny of construct validity in assessment development (Chen and Zechner, 2011). The specific steps of feature selection, conducted on individual features, are as follows.
To begin with, the importance of the features is determined by computing each feature's Pearson correlation coefficient with the human raters' scores on the training set (Chen and Zechner, 2011). Features whose absolute Pearson correlation coefficient with the human scores is lower than 0.20 are removed from the feature set.
Next, we calculate the inter-correlations between the remaining features. For each pair of features whose Pearson correlation coefficient is larger than 0.90, one of the two is removed. The criteria for removal are as follows: firstly, at least one feature in each subset of features must be retained; subject to this condition, the removed feature should be the linguistically less meaningful of the two.
For the prompt-specific rating model, feature selection is conducted on the essays associated with the same prompt. For the generic rating model, the final feature set used for training is the intersection of the 8 feature sets obtained for the prompt-specific rating models.
For space reasons, we briefly summarize the feature selection results here. Among the lexical features, word length in characters larger than 8 and 10, the number of words in each of the levels from 3 to 6, the number of unique words, and the number of spelling errors are mostly selected. As for the syntactical features, sentence length in words larger than 18 and 25, the number of commas, the mean clause length and the mean depth of the parse tree are usually selected. Among the grammar and fluency features, the mean tf/TF of word bigrams and the mean tf/TF of POS trigrams are always selected. For the content and prompt-specific features, essay length in words, word vector and semantic vector similarity with highly rated essays, and text coherence are usually selected for training a prompt-specific rating model. For the generic rating model, the prompt-specific features such as word vector similarity and semantic vector similarity are removed.
Evaluation Methodology
We conduct three sets of experiments to evaluate the effectiveness of our listwise learning to rank approach for automated essay scoring. The first set of experiments evaluates our proposed approach under a prompt-specific setting. We conduct 5-fold cross-validation, where the essays of each prompt are randomly partitioned into 5 subsets; in each fold, 4 subsets are used for training and one for testing. To avoid bias introduced by the random partition, we repeat the 5-fold cross-validation 5 times on 5 different random partitions. The overall quadratic weighted Kappa is averaged over all 25 test subsets.
It should be noted that, when randomly partitioning the whole dataset, the overlap between any two test subsets from different partitions should be kept below 1.5 × 1/(#folds) × 100%. For example, in five-fold cross-validation the overlap should be kept below 30%. This is because, according to the Dirichlet principle (Courant, 2005), each subset in one partition overlaps by more than 20% with at least one subset in another partition in five-fold cross-validation; the tolerance boundary parameter is therefore set to 1.5.
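A small sketch of this overlap check is given below; treating the overlap as the shared fraction of a test subset's essays is our interpretation of the criterion above.

```python
def partitions_overlap_ok(folds_a, folds_b, n_folds=5, tolerance=1.5):
    """Check that no test fold of one random partition shares too many essays with a fold of another."""
    bound = tolerance / n_folds                       # e.g. 1.5 * 1/5 = 30%
    for fold_a in folds_a:                            # each fold is a collection of essay indices
        for fold_b in folds_b:
            shared = len(set(fold_a) & set(fold_b)) / len(fold_a)
            if shared > bound:
                return False
    return True
```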
The objective of the second set of experiments is to test the performance of our listwise learning to rank approach for generic rating models. As in the first experiment, we conduct 5-fold cross-validation repeated 5 times. Here, the essays associated with the same prompt are randomly partitioned into 5 subsets, so that each fold consists of essays across all prompts. The overall performance is averaged over all 25 test subsets.
In the third set of experiments, we evaluate the quality of the features used in our rating model by a feature ablation test and a feature unique test. In the ablation test, we evaluate our rating model's performance before and after the removal of a subset of features from the whole feature set; the performance difference indicates the removed features' contribution to the rating model's overall performance. In the unique test, only a subset of features is used in the rating model construction and all other features are removed; the learned rating model's performance indicates to what extent the features are correlated with the actual essay ratings.

Experimental Results

Evaluation Results

Table 1 presents the first set of experimental results obtained on the ASAP dataset, measured by quadratic weighted Kappa. In Table 1, RF stands for Random Forests, and SVMc, SVMr and SVMp denote SVM for classification, regression and preference ranking, respectively. ANOVA stands for analysis of variance, which tests whether a significant difference exists between the scores given by the human and machine raters. The percentage improvement of our RF bagging K-LambdaMART over each baseline is also given.
For the prompt-specific rating model, all of these algorithms achieve performance comparable to human raters, as the literature reports that the agreement between two professional human raters (measured by statistics for correlation analysis, e.g. quadratic weighted Kappa) is around 0.70 to 0.80 (Williamson, 2009). It is clear that our listwise learning to rank approach, Random Forests bagging K-LambdaMART, gives the best performance on the ASAP dataset. The variance analysis on the six groups of scores (the scores given by the five runs of five-fold cross-validation and the scores provided by the human raters) shows no significant difference, which suggests the robustness of our proposed approach. On the contrary, although the preference ranking based approach, SVM for ranking, and the regression based approach, SVM for regression, give very good results in human-machine agreement, their variance analysis results indicate a significant difference between the scores given by the human and machine raters. The results of the first set of experiments thus suggest the effectiveness and robustness of our listwise learning to rank approach for building a prompt-specific rating model.

For the generic rating model, one can conclude from Table 1 that RF bagging K-LambdaMART performs better than SVM for classification, regression and preference ranking on the ASAP dataset. The dataset used in our experiments consists of essays generated by 8 prompts, and each prompt has its own features. With such a training set, both the classification and the regression based approaches produce poor results, as it is commonly accepted that a rating model whose inter-rater agreement is lower than 0.70 is not applicable (Williamson, 2009). The variance analysis results also reveal a statistically significant difference between the scores given by the human and machine raters, indicating low robustness of these two baselines. The performance comparison of the generic rating models suggests that the rank based approaches, SVMp and RF bagging K-LambdaMART, are more effective than the classification based SVMc and the regression based SVMr, while our proposed RF bagging K-LambdaMART outperforms the state-of-the-art SVMp. Moreover, we find that there is no large performance difference when our proposed method is applied to prompt-specific and generic rating models. Considering the advantages generic rating models have, the results of the second set of experiments suggest the feasibility of building a rating model that generalizes across different prompts while performing only slightly worse than the prompt-specific rating model.

Feature Analysis

Table 2 gives the results of the feature ablation and unique tests. In the table, "All features" stands for the use of all the features available, apart from the prompt-specific features that are not applicable to learning a generic model. In the other rows, the feature subset name denotes the feature subset ablated in the ablation test and the feature subset used in the unique test. Note that we ablate (in the ablation test) or use (in the unique test) a subset of features, such as the different statistics of word length, as a whole, since features belonging to the same subset are usually highly correlated.
Among the lexical features, the two feature subsets word level and statistics of word length are highly correlated with the essay score in both the prompt-specific and the generic rating models. This observation was expected, since word usage is an important notion of writing quality, regardless of essay topics.

Among the syntactical features, the sentence level subset, measured by the height and depth of the parse tree, correlates the most with the essay score. One can infer that long sentences with nested subclauses tend to improve the final ratings.

All grammar and fluency features achieve performance around 0.60 in the feature unique test for the prompt-specific rating model. Moreover, during feature selection we find that the Pearson correlation coefficient between these feature values and the final ratings in each essay prompt ranges from -0.20 to -0.60, which suggests that our method of estimating the number of grammar errors is applicable, since it is widely accepted that, in the evaluation of student essays, essays with more grammar errors tend to receive lower ratings.
Among the content and prompt-specific features, the essay length and word vector similarity features give good results in the feature unique test. The fourth root of essay length in words has been shown to be a highly correlated feature by many works on AES (Shermis and Burstein, 2002). The word vector similarity feature measures prompt-specific vocabulary usage, which is also important for essay evaluation.

In the ablation test, there is no significant performance decrease no matter which feature subset is removed; it seems that each feature subset contributes little to the overall performance and could therefore be removed. However, the results of the feature unique test suggest that most features used in our rating model are in fact highly correlated with writing quality.
Conclusions and Future Work
We have proposed a listwise learning to rank approach to automated essay scoring (AES) that directly incorporates the human-machine agreement into the loss function. Experiments on the public English dataset ASAP show that our approach outperforms the state-of-the-art algorithms in both prompt-specific and generic rating settings. Moreover, it is widely accepted that the agreement between professional human raters ranges from 0.70 to 0.80, measured by quadratic weighted Kappa or Pearson's correlation (Powers et al., 2000) (Williamson, 2009). In the experiments, our approach achieves a quadratic weighted Kappa of around 0.80 for prompt-specific rating and around 0.78 for generic rating, suggesting its potential for automated essay scoring.
Most existing research on AES focuses on training a prompt-specific rating model. While such approaches have the advantage of providing satisfactory rating accuracy for essays written for a specific topic, they also suffer from validity and feasibility problems, as a significant amount of training data, namely essays with human ratings, is required for every essay topic (Attali et al., 2010). It is therefore appealing to develop an approach that learns a generic model with acceptable rating accuracy, since this has both validity-related and logistical advantages. In future work, we plan to continue the research on generic rating models. Because essays associated with different prompts differ in their writing characteristics, a viable approach is to explore more generic writing features that reliably reflect writing quality.
Table 1: Cross-validation on the ASAP dataset, measured by quadratic weighted Kappa.

Algorithm | Prompt-specific | ANOVA | Generic | ANOVA
SVMc (baseline) | 0.7302 (9.75%) | Significant | 0.6319 (23.93%) | Significant
SVMr (baseline) | 0.7861 (1.95%) | Significant | 0.7022 (11.52%) | Significant
SVMp (baseline) | 0.7876 (1.75%) | Significant | 0.7669 (2.11%) | Not significant
RF bagging K-LambdaMART | 0.8014 | Not significant | 0.7831 | Not significant
Table 2: Results of feature ablation and unique test. With all features, the model achieves 0.8014 (prompt-specific) and 0.7831 (generic).

Feature subset | Prompt-specific: Ablation | Prompt-specific: Unique | Generic: Ablation | Generic: Unique
Lexical features
Statistics of word length | 0.7763 | 0.7512 | 0.7801 | 0.7350
Word level | 0.7834 | 0.7582 | 0.7779 | 0.7306
Unique words | 0.7766 | 0.6737 | 0.7692 | 0.6786
Spelling errors | 0.7724 | 0.6863 | 0.7730 | 0.6742
Syntactical features
Statistics of sentence length | 0.7856 | 0.6410 | 0.7684 | 0.7025
Subclauses | 0.7862 | 0.5473 | 0.7813 | 0.5050
Sentence level | 0.7749 | 0.7046 | 0.7796 | 0.6955
Mode, preposition, comma | 0.7847 | 0.5860 | 0.7807 | 0.5606
Grammar and fluency features
Word bigrams and trigrams | 0.7813 | 0.6017 | 0.7824 | 0.4395
POS bigrams and trigrams | 0.7844 | 0.6410 | 0.7786 | 0.6022
Content and prompt-specific features
Essay length | 0.7930 | 0.7502 | 0.7736 | 0.7390
Word vector similarity | 0.7658 | 0.7001 | - | -
Semantic vector similarity | 0.7924 | 0.5683 | - | -
Text coherence | 0.7863 | 0.6947 | 0.7798 | 0.6367
For space reasons, we refer the reader to (Friedman, 2000) and (Breiman, 2001) for details of MART, GBDT and Random Forests.
http://www.kaggle.com/c/asap-sas
3 http://www.merriam-webster.com/
4 http://code.google.com/p/google-api-spelling-java/
5 http://nlp.stanford.edu/software/corenlp.shtml
7 http://people.cs.umass.edu/~vdang/ranklib.html
Acknowledgements

This work is supported in part by the National Natural Science Foundation of China (61103131/F020511), the President Fund of UCAS (Y15101FY00/Y25102HN00), and the National Key Technology R&D Program of China (2012BAH23B03).
Y. Attali and J. Burstein. 2006. Automated essay scoring with e-rater v.2. The Journal of Technology, Learning and Assessment, 4(3).
Yigal Attali, Brent Bridgeman, and Catherine Trapani. 2010. Performance of a generic approach in automated essay scoring. The Journal of Technology, Learning and Assessment, 10(3).
L. Breiman. 2001. Random forests. Machine Learning, 45(1):5-32.
H. M. Breland, R. J. Jones, and L. Jenkins. 1994. The college board vocabulary study. College Entrance Examination Board.
Hermann Brenner and Ulrike Kliebsch. 1996. Dependence of weighted kappa coefficients on the number of categories. Epidemiology, pages 199-202.
T. Briscoe, B. Medlock, and Ø. Andersen. 2010. Automated assessment of ESOL free text examinations. Technical report, University of Cambridge Computer Laboratory, UCAM-CL-TR-790.
C. Burges. 2010. From RankNet to LambdaRank to LambdaMART: An overview. Learning, 11:23-581.
Miao Chen and Klaus Zechner. 2011. Computing and evaluating syntactic complexity features for automated scoring of spontaneous non-native speech. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics, pages 722-731.
Hongbo Chen, Ben He, Tiejian Luo, and Baobin Li. 2012. A ranked-based learning approach to automated essay scoring. In Proceedings of the Second International Conference on Cloud and Green Computing (CGC), pages 448-455. IEEE.
Jacob Cohen. 1960. A coefficient of agreement for nominal scales. Educational and Psychological Measurement, 20(1):37-46.
Richard Courant. 2005. Dirichlet's Principle, Conformal Mapping, and Minimal Surfaces. Courier Dover Publications.
S. Dikli. 2006. An overview of automated scoring of essays. The Journal of Technology, Learning and Assessment, 5(1).
S. T. Dumais. 2005. Latent semantic analysis. Annual Review of Information Science and Technology, 38(1):188-230.
Peter W. Foltz, Darrell Laham, and Thomas K. Landauer. 1999. Automated essay scoring: Applications to educational technology. In World Conference on Educational Multimedia, Hypermedia and Telecommunications, volume 1999, pages 939-944.
Jerome H. Friedman. 2000. Greedy function approximation: A gradient boosting machine. Annals of Statistics, 29:1189-1232.
T. Joachims. 2006. Training linear SVMs in linear time. In Proceedings of the 12th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 217-226. ACM.
Dan Klein and Christopher D. Manning. 2003. Accurate unlexicalized parsing. In Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics - Volume 1, pages 423-430. Association for Computational Linguistics.
L. S. Larkey. 1998. Automatic essay grading using text categorization techniques. In Proceedings of the 21st Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 90-95. ACM.
Heeyoung Lee, Angel Chang, Yves Peirsman, Nathanael Chambers, Mihai Surdeanu, and Dan Jurafsky. 2013. Deterministic coreference resolution based on entity-centric, precision-ranked rules. Computational Linguistics, 39(4).
P. Li, C. Burges, and Q. Wu. 2007. Learning to rank using classification and gradient boosting. In Proceedings of the International Conference on Advances in Neural Information Processing Systems (NIPS).
T. Y. Liu. 2009. Learning to rank for information retrieval. Foundations and Trends in Information Retrieval, 3(3):225-331.
Ted Pedersen, Siddharth Patwardhan, and Jason Michelizzi. 2004. WordNet::Similarity: Measuring the relatedness of concepts. In Demonstration Papers at HLT-NAACL 2004, pages 38-41, Boston, Massachusetts, 2-7 May. Association for Computational Linguistics.
Donald E. Powers, Jill C. Burstein, Martin Chodorow, Mary E. Fowles, and Karen Kukich. 2000. Comparing the validity of automated and human essay scoring. Research Report, Educational Testing Service, Princeton, and Graduate Record Examinations Board.
Tao Qin, Xu-Dong Zhang, Ming-Feng Tsai, De-Sheng Wang, Tie-Yan Liu, and Hang Li. 2008. Query-level loss functions for information retrieval. Information Processing and Management, 44(2):838-855, March.
G. Salton. 1971. The SMART Retrieval System - Experiments in Automatic Document Processing. Prentice-Hall, Inc., Upper Saddle River, NJ, USA.
Henry Scheffe. 1999. The Analysis of Variance, volume 72. Wiley.
M. D. Shermis and J. C. Burstein. 2002. Automated Essay Scoring: A Cross-Disciplinary Perspective. Lawrence Erlbaum.
Kristina Toutanova, Dan Klein, Christopher D. Manning, and Yoram Singer. 2003. Feature-rich part-of-speech tagging with a cyclic dependency network. In Proceedings of the 2003 Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology - Volume 1, pages 173-180. Association for Computational Linguistics.
Vladimir Vapnik, Steven E. Golowich, and Alex Smola. 1996. Support vector method for function approximation, regression estimation, and signal processing. In Advances in Neural Information Processing Systems 9, pages 281-287. MIT Press.
D. M. Williamson. 2009. A framework for implementing automated scoring. In Annual Meeting of the American Educational Research Association and the National Council on Measurement in Education, San Diego, CA.
Q. Wu, C. J. C. Burges, K. M. Svore, and J. Gao. 2008. Ranking, boosting, and model adaptation. Technical report.
H. Yannakoudakis, T. Briscoe, and B. Medlock. 2011. A new dataset and method for automatically grading ESOL texts. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies, volume 1, pages 180-189.
Yisong Yue and C. Burges. 2007. On using simultaneous perturbation stochastic approximation for learning to rank, and the empirical optimality of LambdaRank. Technical Report MSR-TR-2007-115, Microsoft Research.
262,671,077 | Book Reviews Graph-Based Natural Language Processing and Information Retrieval | Graphs are ubiquitous. There is hardly any domain in which objects and their relations cannot be intuitively represented as nodes and edges in a graph. Graph theory is a well-studied sub-discipline of mathematics, with a large body of results and a large number of efficient algorithms that operate on graphs. Like many other disciplines, the fields of natural language processing (NLP) and information retrieval (IR) also deal with data that can be represented as a graph. In this light, it is somewhat surprising that only in recent years the applicability of graph-theoretical frameworks to language technology became apparent and increasingly found its way into publications in the field of computational linguistics. Using algorithms that take the overall graph structure of a problem into account, rather than characteristics of single objects or (unstructured) sets of objects, graph-based methods have been shown to improve a wide range of NLP tasks. In a short but comprehensive overview of the field of graph-based methods for NLP and IR, Rada Mihalcea and Dragomir Radev list an extensive number of techniques and examples from a wide range of research papers by a large number of authors. This book provides an excellent review of this research area, and serves both as an introduction and as a survey of current graph-based techniques in NLP and IR. Because the few existing surveys in this field concentrate on particular aspects, such as graph clustering (Lancichinetti and Fortunato 2009) or IR (Liu 2006), a textbook on the topic was very much needed and this book surely fills this gap.The book is organized in four parts and contains a total of nine chapters. The first part gives an introduction to notions of graph theory, and the second part covers natural and random networks. The third part is devoted to graph-based IR, and part IV covers graph-based NLP. Chapter 1 lays the groundwork for the remainder of the book by introducing all necessary concepts in graph theory, including the notation, graph properties, and graph representations. In the second chapter, a glimpse is offered into the plethora of graph-based algorithms that have been developed independently of applications in NLP and IR. Sacrificing depth for breadth, this chapter does a great job in touching on a wide variety of methods, including minimum spanning trees, shortest-path algorithms, cuts and flows, subgraph matching, dimensionality reduction, random walks, spreading activation, and more. Algorithms are explained concisely, using examples, pseudo-code, and/or illustrations, some of which are very well suited for classroom examples. Network theory is presented in Chapter 3. The term network is here used to refer to naturally occurring relations, as opposed to graphs being generated by an automated process. After presenting the classical Erdős-Rényi random graph model and showing its inadequacy to model power-law degree distributions following Zipf's law, scale-free small-world networks are introduced. Further, | [] | Book Reviews Graph-Based Natural Language Processing and Information Retrieval
Rada Mihalcea and Dragomir Radev (University of North Texas and University of Michigan)
Cambridge, UK, $65.00
Reviewed by Chris Biemann (Technische Universität Darmstadt)
Graphs are ubiquitous. There is hardly any domain in which objects and their relations cannot be intuitively represented as nodes and edges in a graph. Graph theory is a well-studied sub-discipline of mathematics, with a large body of results and a large number of efficient algorithms that operate on graphs. Like many other disciplines, the fields of natural language processing (NLP) and information retrieval (IR) also deal with data that can be represented as a graph. In this light, it is somewhat surprising that only in recent years the applicability of graph-theoretical frameworks to language technology became apparent and increasingly found its way into publications in the field of computational linguistics. Using algorithms that take the overall graph structure of a problem into account, rather than characteristics of single objects or (unstructured) sets of objects, graph-based methods have been shown to improve a wide range of NLP tasks. In a short but comprehensive overview of the field of graph-based methods for NLP and IR, Rada Mihalcea and Dragomir Radev list an extensive number of techniques and examples from a wide range of research papers by a large number of authors. This book provides an excellent review of this research area, and serves both as an introduction and as a survey of current graph-based techniques in NLP and IR. Because the few existing surveys in this field concentrate on particular aspects, such as graph clustering (Lancichinetti and Fortunato 2009) or IR (Liu 2006), a textbook on the topic was very much needed and this book surely fills this gap.The book is organized in four parts and contains a total of nine chapters. The first part gives an introduction to notions of graph theory, and the second part covers natural and random networks. The third part is devoted to graph-based IR, and part IV covers graph-based NLP. Chapter 1 lays the groundwork for the remainder of the book by introducing all necessary concepts in graph theory, including the notation, graph properties, and graph representations. In the second chapter, a glimpse is offered into the plethora of graph-based algorithms that have been developed independently of applications in NLP and IR. Sacrificing depth for breadth, this chapter does a great job in touching on a wide variety of methods, including minimum spanning trees, shortest-path algorithms, cuts and flows, subgraph matching, dimensionality reduction, random walks, spreading activation, and more. Algorithms are explained concisely, using examples, pseudo-code, and/or illustrations, some of which are very well suited for classroom examples. Network theory is presented in Chapter 3. The term network is here used to refer to naturally occurring relations, as opposed to graphs being generated by an automated process. After presenting the classical Erdős-Rényi random graph model and showing its inadequacy to model power-law degree distributions following Zipf's law, scale-free small-world networks are introduced. Further,
several centrality measures, as well as other topics in network theory, are defined and exemplified.
Establishing the connection to NLP, Chapter 4 introduces networks constructed from natural language. Co-occurrence networks and syntactic dependency networks are examined quantitatively. Results on the structure of semantic networks such as WordNet are presented, as well as a range of similarity networks between lexical units. This chapter will surely inspire the reader to watch out for networks in his/her own data. Chapter 5 turns to link analysis for the Web. The PageRank algorithm is described at length, variants for undirected and weighted graphs are introduced, and the algorithm's application to topic-sensitive analysis and query-dependent link analysis is discussed. This chapter is the only one that touches on core IR, and this is also the only chapter with content that can be found in other textbooks (e.g., Liu 2011). Still, this chapter is an important prerequisite for the chapter on applications. It would have been possible to move the description of the algorithms to Chapter 2, however, omitting this part.
The topic of Chapter 6 is text clustering with graph-based methods, outlining the Fiedler method, the Kernighan-Lin method, min-cut clustering, betweenness, and random walk clustering. After defining measures on cluster quality for graphs, spectral and non-spectral graph clustering methods are briefly introduced. Most of the chapter is to be understood as a presentation of general graph clustering methods rather than their application to language. For this, some representative methods for different core ideas were selected. Part IV on graph-based NLP contains the chapters probably most interesting to readers working in computational linguistics. In Chapter 7, graph-based methods for lexical semantics are presented, including detection of semantic classes, synonym detection using random walks on semantic networks, semantic distance on WordNet, and textual entailment using graph matching. Methods for word sense and name disambiguation with graph clustering and random walks are described. The chapter closes with graph-based methods for sentiment lexicon construction and subjectivity classification.
Graph-based methods for syntactic processing are presented in Chapter 8: an unsupervised part-of-speech tagging algorithm based on graph clustering, minimum spanning trees for dependency parsing, PP-attachment with random walks over syntactic co-occurrence graphs, and coreference resolution with graph cuts. In the final chapter, many of the algorithms introduced in the previous chapters are applied to NLP applications as diverse as summarization, passage retrieval, keyword extraction, topic identification and segmentation, discourse, machine translation, cross-language IR, term weighting, and question answering.
As someone with a background in graph-based NLP, I enjoyed reading this book. The writing style is concise and clear, and the authors succeed in conveying the most important points from an incredibly large number of works, viewed from the graph-based perspective. I also liked the extensive use of examples: throughout, almost half of the space is used for figures and tables illustrating the methods, which some readers might perceive as unbalanced, however. With just under 200 pages and a topic as broad as this, it necessarily follows that many of the presented methods are exemplified and touched upon rather than discussed in great detail. Although this sometimes leads to the situation that some passages can only be understood with background knowledge, it is noteworthy that every chapter includes a section on further reading. In this way, the book serves as an entry point to a deeper engagement with graph-based methods for NLP and IR, and it encourages readers to see their NLP problem from a graph-based view.
For a future edition, however, I have a few wishes: It would be nice if the figures and examples were less detached from the text and explained more thoroughly. At times, it would be helpful to present deeper insights and to connect the methodologies, rather than just presenting them next to each other. Also, some of the definitions in Chapter 2 could be less confusing and structured better.
Because this book emphasizes graph-based aspects for language processing rather than aiming at exhaustively treating the numerous tasks that benefit from graph-based methods, it cannot replace a general introduction to NLP or IR: For students without prior knowledge in NLP and IR, a more guided and focused approach to the topic would be required. The target audience is, rather, NLP researchers and professionals who want to add the graph-based view to their arsenal of methods, and to become inspired by this rapidly growing research area. It is equally suited for people working in graph algorithms to learn about graphs in language as a field of application for their work. I will surely consult this volume in the future to supplement the preparation of lectures because of its comprehensive references and its richness in examples.
Lancichinetti, Andrea and Santo Fortunato. 2009. Community detection algorithms: A comparative analysis. Physical Review E, 80:056117.
Liu, Bing. 2011. Web Data Mining: Exploring Hyperlinks, Contents, and Usage Data (second edition). Springer, Berlin.
Liu, Yi. 2006. Graph-based learning models for information retrieval: A survey. Available at: www.cse.msu.edu/~rongjin/semisupervised/graph.pdf.
Chris Biemann is Juniorprofessor (assistant professor) for Language Technology at Darmstadt University of Technology. His current research interests include statistical semantics, graph-based methods for unsupervised acquisition, and topic modeling. Biemann's address is UKP lab, Computer Science Department, Hochschulstr. 10, 64289 Darmstadt, Germany; e-mail: biemann@tk.informatik.tu-darmstadt.de.
43,345,625 | SRA PARTICIPATION IN TIPSTER PHASE II | [] | SRA PARTICIPATION IN TIPSTER PHASE II
Lisa Rau lisa_rau@sra.com
SRA International, Inc
4300 Fair Lakes Court, Fairfax, VA 22033
SRA PARTICIPATION IN TIPSTER PHASE II
(703) 803-1851
I. INTRODUCTION
SRA, although not a research contractor under Tipster Phase II, nonetheless actively participated in the program in a variety of ways. Some of the activities include:
Participation in the Common Pattern Specification Language subgroup
Our current activities include support for the data extraction components of the ISX-led ISLE/InfoTech Tipster application.
II. RESEARCH PROGRAM HIGHLIGHTS
The application of machine learning techniques to reduce the customization time for data extraction systems
The use of multilingual data extraction for targeted machine translation or gisting
Fusion of data extraction technology with other media (speech, video) for multimedia fusion applications
III. EVALUATIONS
SRA participated in the Sixth Message Understanding Conference in three of the four areas (Named Entity, Template Element, and Scenario Template). SRA participated in the Multilingual Entity Task (MET) in both Japanese and Spanish.
IV. PHASE III PLANS
On the research front, as a Tipster Phase III contractor, SRA intends to make significant advances in the customizability of extraction technology, primarily by bringing together our expertise in machine learning and natural language processing.
A second focus of our Phase III research plans involves the intelligent summarization of texts.
Finally, we intend to continue to contribute to the Contractor Architecture Working Group and other common efforts, and complete the Tipster application systems.
SRA engages in an active program of research and operational support in both multilingual data extraction, natural language processing more generally, and text retrieval integration. Our current research thrusts involve:
Preparation and distribution of the Named Entity Tagging and Discourse Tagging Tools for the Sixth Message Understanding Conference (MUC6) and Multilingual Entity Task (MET)
Implementation / Award of two Tipster application prototype systems: Bluesky and USACOM
Sole contractor on two Tipster-affiliated systems, the Intelligence Analysts Associate (IAA) under funding from Rome Laboratories for NAIC, and the Overture program for OIR.
Participation in the Contractor Architecture
Working Group (CAWG) |
|
14,586,860 | Speech-Enabled Computer-Aided Translation: A Satisfaction Survey with Post-Editor Trainees | The present study has surveyed post-editor trainees' views and attitudes before and after the introduction of speech technology as a front end to a computer-aided translation workbench. The aim of the survey was (i) to identify attitudes and perceptions among post-editor trainees before performing a post-editing task using automatic speech recognition (ASR); and (ii) to assess the degree to which post-editors' attitudes and expectations to the use of speech technology changed after actually using it. The survey was based on two questionnaires: the first one administered before the participants performed with the ASR system and the second one at the end of the session, once they have actually used ASR while post-editing machine translation outputs. Overall, the results suggest that the surveyed posteditor trainees tended to report a positive view of ASR in the context of post-editing and they would consider adopting ASR as an input method for future post-editing tasks. | [] | Speech-Enabled Computer-Aided Translation: A Satisfaction Survey with Post-Editor Trainees
Association for Computational Linguistics, 26 April 2014.
Bartolomé Mesa-Lao bm.ibc@cbs.dk
Center for Research and Innovation in Translation and Translation Technology, Department of International Business Communication, Copenhagen Business School
Denmark
Speech-Enabled Computer-Aided Translation: A Satisfaction Survey with Post-Editor Trainees
Workshop on Humans and Computer-assisted Translation
Gothenburg, Sweden. Association for Computational Linguistics, 26 April 2014.
The present study has surveyed post-editor trainees' views and attitudes before and after the introduction of speech technology as a front end to a computer-aided translation workbench. The aim of the survey was (i) to identify attitudes and perceptions among post-editor trainees before performing a post-editing task using automatic speech recognition (ASR); and (ii) to assess the degree to which post-editors' attitudes and expectations to the use of speech technology changed after actually using it. The survey was based on two questionnaires: the first one administered before the participants performed with the ASR system and the second one at the end of the session, once they have actually used ASR while post-editing machine translation outputs. Overall, the results suggest that the surveyed posteditor trainees tended to report a positive view of ASR in the context of post-editing and they would consider adopting ASR as an input method for future post-editing tasks.
Introduction
In recent years, significant progress has been made in advancing automatic speech recognition (ASR) technology. Nowadays it can be found at the other end of customer-support hotlines, it is built into operating systems and it is offered as an alternative text-input method in many mobile devices. This technology is not only improving at a steady pace, but is also becoming increasingly usable and useful.
At the same time, the translation industry is going through a societal and technological change in its evolution. In less than ten years, the industry is considering new tools, workflows and solutions to service a steadily growing market. Given the significant improvements in machine translation (MT) quality and the increasing demand for translations, post-editing of MT is becoming a well-accepted practice in the translation industry, since it has been shown to allow for larger volumes of translations to be produced saving time and costs.
Against this background, it seems reasonable to envisage an era of convergence in the coming years where speech technology can make a difference in the field of translation technologies. As post-editing services are becoming a common practice among language service providers and ASR is gaining momentum, it seems reasonable to explore the interplay between both fields to create new business solutions and workflows.
In the context of machine-aided human translation and human-aided machine translation, different scenarios have been investigated where human translators are brought into the loop, interacting with a computer through a variety of input modalities to improve the efficiency and accuracy of the translation process (e.g., Dragsted et al. 2011; Toselli et al. 2011; Vidal et al. 2006). ASR systems have the potential to improve the productivity and comfort of performing computer-based tasks for a wide variety of users, allowing them to enter both text and commands into the computer using just their voice. However, further studies need to be conducted to build up new knowledge about the way in which state-of-the-art ASR software can be applied to one of the most common tasks translators face nowadays, i.e. post-editing of MT outputs.
The present study has two related objectives: First, to report on a satisfaction survey with posteditor trainees after showing them how to use ASR in post-editing tasks. Second, based on the feedback provided by the participants, to assess the change in users' expectations and acceptance of ASR technology as an alternative input method for their daily work.
Method
In this study, we explore the potential of combining one of the most popular computer-aided translation workbenches on the market (i.e. memoQ) with one of the most well-known ASR packages (i.e. Dragon Naturally Speaking from Nuance).
Overview
Two questionnaires were developed and deployed as a survey. The survey was divided into two phases: a prospective phase in which we surveyed post-editor trainees' views and expectations toward ASR, and a subsequent retrospective phase in which actual post-editors' experiences and satisfaction with the technology were surveyed. Participants had to answer a 10-item questionnaire in the prospective phase and a 7-item questionnaire in the retrospective phase. These two questionnaires partially overlapped, allowing us to compare, for each participant, the answers given before and after the introduction and use of the target technology.
Participants profile
Participants were recruited through the Universitat Autònoma de Barcelona (Spain). The group included 11 females and 4 males, ranging in age from 22 to 35. All 15 participants had a full degree in Translation and Interpreting Studies and were regular users of computer-aided translation software (mainly memoQ and SDL Trados Studio). All of them had already performed MT post-editing tasks as part of their previous training as translators and, at the moment of the data collection, they were also taking a 12-hour course on post-editing as part of their master's degree in Translation. None of the participants had ever used Dragon Naturally Speaking, but four participants declared to have tried the speech input options in their mobile phones to dictate text messages.
Procedure
Individual sessions occurred at a university office. In the first part of the session, each participant had to complete an on-line questionnaire. This initial survey covered the following topics:
1. General information about their profile as translators, including education, years of experience and employment status.

In the second part of the session, after the initial questionnaire was completed, all participants performed two post-editing tasks under the following two input conditions (one each):
Condition 1: non-ASR input modality, i.e. keyboard and mouse.
Condition 2: ASR input modality combined with other non-ASR modalities, i.e. keyboard and mouse.
The language pair involved in the tasks was Spanish to English 1 . Two different texts from the domain of mobile phone marketing were used to perform the post-editing tasks under condition 1 and 2. These two texts were imported to a memoQ project and then fully pre-translated using MT coming from the Google API plug-in in memoQ. The order of the two input conditions and the two texts in each condition were counterbalanced across participants.
In an attempt to unify post-editing criteria among participants, all of them were instructed to follow the same post-editing guidelines aiming at a final high-quality target text 2 . In the ASR input condition, participants also read in hard copy the most frequent commands in Dragon Naturally Speaking v.10 that they could use to post-edit using ASR (Select <w>, Scratch that, Cut that, etc.). All of them had to do the basic training tutorial included in the software (5 minutes training on average per participant) in order to improve the recognition accuracy. Following the training, participants also had the chance to practice the dictation of text and commands before actually performing the two post-editing tasks.
In the third part of the session, participants completed a 7-item post-session questionnaire regarding their opinions about ASR while post-editing.
Data collection and analysis
Survey data
For questionnaires' data, responses to quantitative items were entered into a spreadsheet and mean responses were calculated across participants. For a comparison of responses to different survey items, paired statistics were used: paired t-test for items coded as ordinal variables, and chi-square test for items coded as categorical variables. The questionnaires did not include open-ended questions or comments.
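The tests named above map onto standard statistical routines. The sketch below shows how the per-item means, the paired t-test for ordinal items and the chi-square test for categorical items could be computed; the CSV file and the column names are assumptions for illustration, not the authors' actual data.

```python
# Illustrative analysis of questionnaire data; file and column names are hypothetical.
import pandas as pd
from scipy import stats

responses = pd.read_csv("questionnaire_responses.csv")  # one row per participant

# Mean response per item across participants
item_means = responses.mean(numeric_only=True)

# Paired t-test for two ordinal items answered by the same participants
t_stat, p_ordinal = stats.ttest_rel(responses["ease_asr"], responses["ease_keyboard"])

# Chi-square goodness-of-fit test for a categorical item
observed = responses["preferred_input"].value_counts()
chi2, p_categorical = stats.chisquare(observed)

print(item_means, t_stat, p_ordinal, chi2, p_categorical)
```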
Task log files
For task performance data (which is not going to be elaborated in this paper), computer screen including audio was recorded using BB FlashBack Recorder Pro v. 2.8 from Blueberry Software. With the use of the video recordings, a time-stamped log of user actions and ASR system responses was produced for each participant. Each user action was coded for the following: (i) input method involved; (ii) for the post-editing task involving ASR, text entry rate in the form of text or commands, and (iii), for the same task, which method of error correction was used.
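As a rough illustration of this coding scheme, each logged user action could be represented as a record like the one below; the field names and example values are assumptions, not the authors' actual annotation format.

```python
# Hypothetical record structure for the time-stamped action log.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LoggedAction:
    timestamp_ms: int                        # offset from the start of the task
    input_method: str                        # "keyboard", "mouse" or "asr"
    asr_entry_type: Optional[str] = None     # "text" or "command" (ASR condition only)
    correction_method: Optional[str] = None  # how an error was corrected, if applicable

log = [
    LoggedAction(1520, "asr", asr_entry_type="text"),
    LoggedAction(4310, "asr", asr_entry_type="command", correction_method="scratch_that"),
    LoggedAction(9075, "keyboard", correction_method="retype"),
]
```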
Satisfaction data
Responses to the post-session questionnaire were entered and averaged. We computed an overall ASR "satisfaction score" for each participant by summing the responses to the seven items that related to satisfaction with ASR. We computed a 95 percent confidence interval (CI) for the mean of the satisfaction score to create bounded estimates for the satisfaction score.
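A minimal sketch of that computation is given below. The reversal of negatively worded statements and the rescaling to 0-100 follow the description of the score given later in the paper (100 = strong agreement with all positive statements); the toy data and the indices of the negative items are assumptions.

```python
# Sketch of the composite satisfaction score and its 95% CI; toy data and the choice of
# negatively worded statements are assumptions for illustration only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
items = rng.integers(1, 8, size=(15, 7)).astype(float)   # 15 participants, 7 items rated 1-7

negative = [3, 4, 6]                       # hypothetical indices of negative statements
items[:, negative] = 8 - items[:, negative]  # reverse-code so higher always means more positive

raw = items.sum(axis=1)                    # raw range 7-49
score = (raw - 7) / (49 - 7) * 100         # rescale to 0-100

mean = score.mean()
ci_low, ci_high = stats.t.interval(0.95, df=len(score) - 1, loc=mean, scale=stats.sem(score))
print(round(mean, 1), (round(ci_low, 1), round(ci_high, 1)))
```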
Survey results
Usage of speech input method
To determine why participants would decide to use ASR in the future to post-edit, we asked them to rate the importance of eight different reasons, on a scale of 1 to 7, with 7 being the highest in importance. The top reason for deciding to use ASR was that it would involve less fatigue (Table 1).
Table 1: Importance of reasons for using automatic speech recognition (ASR), rated on a scale from 1 to 7 (reasons for using the speech input method; mean and 95% CI per reason).
Usage of non-speech input methods
Since none of the participants had ever used ASR to perform any of their translation or post-editing assignments before, and in order to understand the relative usage data, we also asked participants about their reasons for choosing non-speech input methods (i.e. keyboard and mouse). To this end, they rated the importance of six reasons on a scale of 1 to 7, with 7 being most important. In the introductory questionnaire, most participants believed that keyboard shortcuts would be quicker and easier than using spoken commands (Table 2).
Table 2 (reasons for using non-speech input methods; mean and 95% CI):
They are easier: 6.5* (5.7, 6.8)
Less setup involved: 6.1* (5.5, 6.3)
Frustration with speech: 5.9* (5.2, 6.)

Having to train the system (setup involved) in order to improve recognition accuracy or donning a headset for dictating was initially perceived as a barrier for using ASR as the preferred input method. According to the survey, participants would also choose other input methods when ASR performed poorly or not at all, either in general or for dictating particular commands (e.g., for some participants the command Cut that was consistently recognized as Cap that). Less important reasons were the need to rest one's voice or to switch methods just for variety.
Opinions about speech and non-speech input methods
Participants rated their satisfaction with 10 usability indicators for both ASR and non-ASR alternatives (Tables 3 and 4).

Table 4: Percentage of participants who disliked particular aspects of the automatic speech recognition (ASR) system and non-speech input methods.
Likes
ASR for translator-computer interaction succeeds at easing the task (its most-liked benefit). Almost 75% liked the speed they achieved with ASR, despite being slower when compared against non-ASR input methods. Almost 74% liked the effort required to use ASR, and only 17.3% found it fatiguing. Participants' largest complaint with ASR was related to recognition accuracy. Only 52.7% liked the recognition accuracy they achieved, and fixing recognition mistakes ranked as the top dislike at 74.5%. The second most frequent dislike was potential work environment dissonance or loss of privacy during use of ASR, at 45.9% of participants.
Ratings show significant differences between ASR and non-speech input methods, particularly with regard to accuracy and amusement involved (Fun item in the questionnaire).
Post-session questionnaire results
To further examine subjective opinions of ASR in post-editing compared to non-speech input methods, we asked participants to rate their agreement with several statements regarding learnability, ease of use, reliability and fun after performing the post-editing tasks under the two conditions. Agreement was rated on a scale of 1 to 7, from "strongly disagree" to "strongly agree". Table 5 shows participants' level of agreement with the seven statements in the post-session questionnaire.
Statements and participants' level of agreement, Mean (95% CI):
1. I expected using ASR in post-editing to be more difficult than it actually is. 6.6* (6.5, 6.8)
2. My performance with the selection of ASR commands improved by the end of the session. 6.5* (5.4, 6.9)
3. The system correctly recognizes almost every command I dictate. 5.9* (5.5, 6.4)
4. It is difficult to correct errors made by the ASR software. 2.9 (2.3, 4.1)
5. Using ASR in the context of post-editing can be a frustrating experience. 2.4 (1.9, 3.8)
6. I can enter text more accurately with ASR than with any other method. 2.1 (1.7, 2.9)
7. I was tired by the end of the session. 1.7 (1.2, 2.9)
* Agreement significantly greater than neutral rating of 4.0 (p < 0.05)
Table 5: Participants' level of agreement with statements about the ASR input method in post-editing tasks. Ratings are on a scale from 1 to 7, from "strongly disagree" to "strongly agree", with 4.0 representing a neutral rating.
The results of the post-session questionnaire show that participants had significantly greater than neutral agreement (positively) about ASR in the context of post-editing. Overall they agreed that it is easier to use ASR for post-editing purposes than they actually thought. They also positively agreed that the ASR software was able to recognize almost every command they dictated (i.e. Select <w>, Scratch that, etc.) and acknowledged that their performance when dictating commands was better as they became more familiar with the task.
When scores were combined for the seven statements into an overall satisfaction score, the average was 73.5 [66.3, 87.4], on a scale of 0 to 100 3 . Thus, this average is significantly more positive than neutral. 12 out of the 15 surveyed participants stated that they will definitely consider adopting ASR in combination with nonspeech input modalities in their daily practice as professional translators.
Discussion
The results of the present study show that the surveyed post-editor trainees tended to report a very positive view on the use of ASR in the context of post-editing. In general, findings suggest that human translators would not regret the integration of ASR as one of the possible input methods for performing post-editing tasks.
While many questions regarding effective use of ASR remain, this study provides some basis for further efforts to better integrate ASR in the context of computer-aided translation. Some specific insights supported by the collected data are:
Expectations about ASR were definitely more positive after having performed with speech as an input method. Participants positively agreed that it is easier and more effective than previously thought.
Most of the challenges (dislikes) of ASR when compared to other non-ASR input methods can be tackled if the user is provided with both ASR and non-ASR input methods to be used at their convenience.
Participants' views seem to indicate that they would use ASR as a complement rather than a substitute for non-speech input methods.
Conclusions
Post-editor trainees have a positive view of ASR when combining traditional non-speech input methods (i.e. keyboard and mouse) with the use of speech. Acknowledging this up front, an interesting field for future work is to introduce proper training on correction strategies. Studies in this direction could help to investigate how training post-editors to apply optimal correction strategies can help them to increase performance and, consequently, user satisfaction.
Table 2: Importance of reasons for choosing non-speech input methods instead of automatic speech recognition, rated on a scale from 1 to 7.
Table 3: Percentage of participants who liked particular aspects of the automatic speech recognition (ASR) system and non-speech input methods.

Dislikes (% responding yes; ASR / non-ASR, partial rows):
Fixing recognition mistakes: 74.5
Disturbs colleagues: 45.9
Setup involved: 36.8
Fatigue: 17.3 / 12.7
Participants performed from L1 to L2. 2 The post-editing guidelines distributed in hard copy were: i) Retain as much raw MT as possible; ii) Do not introduce stylistic changes; iii) Make corrections only where absolutely necessary, i.e. correct words and phrases that are clearly wrong, inadequate or ambiguous according to English grammar; iv) Make sure there are no mistranslations with regard to the Spanish source text; v) Publishable quality is expected.
A score of 100 represents a strong agreement with all positive statements and a strong disagreement with all negative statements, while a score of 50 represents a neutral response to all statements.
Acknowledgments
We would like to thank all the participants in this study for their generous contributions of time, effort and insights.
Dragsted, B., Mees, I. M., and Gorm Hansen, I. 2011. Speaking your translation: students' first encounter with speech recognition technology. Translation & Interpreting, 3(1).
Dymetman, M., Brousseau, J., Foster, G., Isabelle, P., Normandin, Y., and Plamondon, P. 1994. Towards an automatic dictation system for translators: the TransTalk project. In Proceedings of the International Conference on Spoken Language Processing (ICSLP 94), 691-694.
Koester, H. H. 2004. Usage, performance, and satisfaction outcomes for experienced users of automatic speech recognition. Journal of Rehabilitation Research & Development, 41(5): 739-754.
O'Brien, S. 2012. Translation as human-computer interaction. Translation Spaces, 1(1), 101-122.
Toselli, A., Vidal, E., and Casacuberta, F. 2011. Multimodal Interactive Pattern Recognition and Applications. Springer.
Vidal, E., Casacuberta, F., Rodríguez, L., Civera, J., and Martínez-Hinarejos, C. D. 2006. Computer-Assisted Translation Using Speech Recognition. IEEE Transactions on Audio, Speech, and Language Processing, 14(3): 941-951.
14,594,693 | Reading behavior predicts syntactic categories | It is well-known that readers are less likely to fixate their gaze on closed class syntactic categories such as prepositions and pronouns. This paper investigates to what extent the syntactic category of a word in context can be predicted from gaze features obtained using eye-tracking equipment. If syntax can be reliably predicted from eye movements of readers, it can speed up linguistic annotation substantially, since reading is considerably faster than doing linguistic annotation by hand. Our results show that gaze features do discriminate between most pairs of syntactic categories, and we show how we can use this to annotate words with part of speech across domains, when tag dictionaries enable us to narrow down the set of potential categories. | [
18846560,
14760908,
629094
] | Reading behavior predicts syntactic categories
Association for Computational Linguistics, July 30-31, 2015.
Maria Barrett
University of Copenhagen, Njalsgade 140, DK-2300 Copenhagen S
Anders Søgaard soegaard@hum.ku.dk
University of Copenhagen, Njalsgade 140, DK-2300 Copenhagen S
Reading behavior predicts syntactic categories
Proceedings of the 19th Conference on Computational Language Learning
Beijing, China. Association for Computational Linguistics, July 30-31, 2015.
It is well-known that readers are less likely to fixate their gaze on closed class syntactic categories such as prepositions and pronouns. This paper investigates to what extent the syntactic category of a word in context can be predicted from gaze features obtained using eye-tracking equipment. If syntax can be reliably predicted from eye movements of readers, it can speed up linguistic annotation substantially, since reading is considerably faster than doing linguistic annotation by hand. Our results show that gaze features do discriminate between most pairs of syntactic categories, and we show how we can use this to annotate words with part of speech across domains, when tag dictionaries enable us to narrow down the set of potential categories.
Introduction
Eye movements during reading is a wellestablished proxy for cognitive processing, and it is well-known that readers are more likely to fixate on words from open syntactic categories (verbs, nouns, adjectives) than on closed category items like prepositions and conjunctions (Rayner, 1998;Nilsson and Nivre, 2009). Generally, readers seem to be most likely to fixate and re-fixate on nouns (Furtner et al., 2009). If reading behavior is affected by syntactic category, maybe reading behavior can, conversely, also tell us about the syntax of words in context. This paper investigates to what extent gaze data can be used to predict syntactic categories. We show that gaze data can effectively be used to discriminate between a wide range of part of speech (POS) pairs, and gaze data can therefore be used to significantly improve type-constrained POS taggers. This is potentially useful, since eye-tracking data becomes more and more readily available with the emergence of eye trackers in mainstream consumer products (San Agustin et al., 2010). With the development of robust eye-tracking in laptops, it is easy to imagine digital text providers storing gaze data, which could then be used to improve automated analysis of their publications. Contributions We are, to the best of our knowledge, the first to study reading behavior of syntactically annotated, natural text across domains, and how gaze correlates with a complete set of syntactic categories. We use logistic regression to show that gaze features discriminate between POS pairs, even across domains. We then show how gaze features can improve a cross-domain supervised POS tagger. We show that gaze-based predictions are robust, not only across domains, but also across subjects.
Experiment
In our experiment, 10 subjects read syntactically annotated sentences from five domains.
Data The data consists of 250 sentences: 50 sentences (min. 3 tokens, max. 120 characters), randomly sampled from each of five different, manually annotated corpora: Wall Street Journal articles (WSJ), Wall Street Journal headlines (HDL), emails (MAI), weblogs (WBL), and Twitter (TWI). WSJ and HDL syntactically annotated sentences come from the OntoNotes 4.0 release of the English Penn Treebank. 1 The MAI and WBL sections come from the English Web Treebank. 2 The TWI data comes from the work of Foster et al. (2011). We mapped the gold labels to the 12 Universal POS (Petrov et al., 2011), but discarded the category X due to data sparsity. Experimental design The 250 items were read by all 10 participants, but participants read the items in one of five randomized orders. Neither the source domain for the sentence, nor the POS tags were revealed to the participant at any time. One sentence was presented at a time in black on a light gray background. Font face was Verdana and font size was 25 pixels. Sentences were centered vertically, and all sentences could fit into one line. All sentences were preceded by a fixation cross. The experiment was self-paced. To switch to a new sentence and to ensure that the sentence was actually processed by the participant, participants rated the immediate interest towards the sentence on a scale from 1-6 by pressing the corresponding number on the numeric keypad. Participants were instructed to read and continue to the next sentence as quickly as possible. The actual experiment was preceded by 25 practice sentences to familiarize the participant with the experimental setup.
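The sampling constraints and the tag mapping described above can be sketched as follows; the abbreviated mapping dictionary and the sentence format (a list of (word, treebank tag) pairs) are assumptions rather than the exact resources used.

```python
# Sketch of per-domain sampling and mapping to the Universal POS tagset; the mapping
# dictionary is abbreviated and purely illustrative.
import random

PTB_TO_UNIVERSAL = {"NN": "NOUN", "NNS": "NOUN", "VB": "VERB", "VBD": "VERB",
                    "JJ": "ADJ", "IN": "ADP", "DT": "DET", "PRP": "PRON"}  # abbreviated

def sample_domain(sentences, k=50, min_tokens=3, max_chars=120, seed=0):
    eligible = [s for s in sentences
                if len(s) >= min_tokens and len(" ".join(w for w, _ in s)) <= max_chars]
    random.seed(seed)
    return random.sample(eligible, k)

def to_universal(sentence):
    mapped = [(w, PTB_TO_UNIVERSAL.get(tag, "X")) for w, tag in sentence]
    return [(w, tag) for w, tag in mapped if tag != "X"]   # discard the X category
```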
Our apparatus was a Tobii X120 eye tracker with a 15" monitor. Sampling rate was 120 Hz binocular. Participants were seated on a chair approximately 65 cm from the display. We recruited 10 participants (7 male, mean age 31.30 ±4.74)) from campus. All were native English speakers. Their vision was normal or corrected to normal, and none were diagnosed with dyslexia. All were skilled readers. Minimum educational level was an ongoing MA. Each session lasted around 40 minutes. One participant had no fixations on a few sentences. We believe that erroneous key strokes caused the participant to skip a few sentences.
Features There are many different features for exploring cognitive load during reading (Rayner, 1998). We extracted a broad selection of cognitive effort features from the raw eye-tracking data in order to determine which are more fit for the task. The features are inspired by Salojärvi et al. (2003), who used a similarly exploratory approach. We wanted to cover both oculomotor features, such as fixations on previous and subsequent words, and measures relating to early (e.g. first fixation duration) and late processing (e.g. regression destinations / departure points and total fixation time). We also included reading speed and reading depth features, such as fixation probability and total fixation time per word. In total, we have 32 gaze features, where some are highly correlated (such as number of fixations on a word and total fixation time per sentence).
Dundee Corpus
The main weakness of the experiment is the small dataset. As future work, we plan to replicate the experiment with a $99 eye tracker for subjects to use at home. This will make it easy to collect thousands of sentences, leading to more robust gaze-based POS models. Here, instead, we include an experiment with the Dundee corpus (Kennedy and Pynte, 2005). The Dundee corpus is a widely used dataset in research on reading and consists of gaze data for 10 subjects reading 20 newswire articles (about 51,000 words). We extracted the same word-based features as above, except probability for 1st and 2nd fixation, and sentence-level features (in the Dundee corpus, subjects are exposed to multiple sentences per screen window), and used them as features in our POS tagging experiments ( §3).
Learning experiments In our experiments, we used type-constrained logistic regression with L2-regularization and type-constrained (averaged) structured perceptron (Collins, 2002;Täckström et al., 2013). In all experiments, unless otherwise stated, we trained our models on four domains and evaluated on the fifth to avoid over-fitting to the First fixation duration on every word 9.1 5
Previous fixation duration 7.0 6
Mean fixation duration per word 6.6 7
Re-read prob 5.7 8
Next fixation duration 2.0 9
Total fixation duration per word 2.0 characteristics of a specific domain. Our tag dictionary is from Wiktionary 3 and covers 95% of all tokens.
Results
Domain differences Our first observation is that the gaze characteristics differ slightly across domains, but more across POS. Figure While the overall pattern is similar across the five domains (open category items are more likely to be fixated), we see domain differences. For example, pronouns are more likely to be fixated in headlines. The explanation could lie in the different distributions of function words and content words. It is established and unchallenged that function words are fixated on about 35% of the time and content words are fixated on about 85% of the time (Rayner and Duffy, 1988). In our data, these numbers vary among the domains according to frequency of that word class, see Figure 2. Figure 2a shows that there is a strong linear correlation between content word frequency and content word fixation probability among the different domains: Pearson's ρ = 0.909. From Figure 2b, there is a negative correlation between function word frequency and function word fixation probability: Pearson's ρ = −0.702. Predictive gaze features To investigate which gaze features were more predictive of part of speech, we used stability selection (Meinshausen and Bühlmann, 2010) with logistic regression classification on all binary POS classifications. Fixation probability was the most informative feature, but also whether the words around the word is fixated is important along with number of fixations. In our binary discrimination and POS tagging experiments, using L2-regularization or averaging with all features was superior (on Twitter data) to using stability selection for feature selection. We also asked a psycholinguist to select a small set of relatively independent gaze features fit for the task (first fixation duration, fixation probability and re-read probability), but again, using all features with L2-regularization led to better performance on the Twitter data. Binary discrimination First, we trained L2regularized logistic regression models to discriminate between all pairs of POS tags only using gaze features. In other words, for example we selected all words annotated as NOUN or VERB, and trained a logistic regression model to discriminate between the two in a five-fold cross validation setup. We report error reduction acc−baseline 1−baseline in Figure 3. POS tagging We also tried evaluating our gaze features directly in a supervised POS tagger. 4 Owoputi et al. (2013)) augmented with the above gaze features. The POS tagger was trained on a very small seed of data (200 sentences), doing 20 passes over the data, and evaluated on out-ofdomain test data; training on four domains, testing on one. For the gaze features, instead of using token gaze features, we first built a lexicon with average word type statistics from the training data. We normalize the gaze matrix by dividing with its standard deviation. This is the normalizer in Turian et al. (2010) with σ = 1.0. We condition on the gaze features of the current word, only. We compare performance using gaze features to using only word frequency, estimating from the (unlabeled) English Web Treebank corpus, and word length (FREQLEN).
The first three columns in Table 2 show, that gaze features help POS tagging, at least when trained on very small seeds of data. Error reduction using gaze features from the Dundee corpus (DGAZE) is 12%. We know that gaze features correlate with word frequency and word length, but using these features directly leads to much smaller performance gains. Concatenating the two features sets leads to the best performance, with an error reduction of 16%.
In follow-up experiments, we observe that averaging over 10 subjects when collecting gaze features does not seem as important as we expected. Tagging accuracies on raw (non-averaged) data are only about 1% lower. Finally, we also tried running logistic regression experiments across subjects rather than domains. Here, tagging accuracies were again comparable to our set-up, suggesting that gaze features are also robust across subjects. Matthies and Søgaard (2013) present results that suggest that individual variation among (academically trained) subjects' reading behavior was not a greater source of error than variation within subjects, showing that it is possible to predict fixations across readers. Our work relates to such work, studying the robustness of reading models across domains and readers, but it also relates in spirit to research on using weak supervision in NLP, e.g., work on using HTML markup to improve dependency parsers (Spitkovsky, 2013) or using clickthrough data to improve POS taggers (Ganchev et al., 2012).
Related work
Conclusions
We have shown that it is possible to use gaze features to discriminate between many POS pairs across domains, even with only a small dataset and a small set of subjects. We also showed that gaze features can improve the performance of a POS tagger trained on small seeds of data.
Figure 1 :
1Fixation probability boxplots across five domains
Figure 2 :
2Scatter plot of frequency and fixation probability for content words (NOUN, VERB, ADJ, NUM) and function words (PRON, CONJ, ADP, DET, PRT)
Figure 3 :
3Error reduction of logistic regression over a majority baseline. All domains fixation probabilities across the 11 parts of speech.
Table 1 :
110 most used features by stability selec-
tion from logistic regression classification of all
POS pairs on all domains, 5-fold cross validation.
0.50
0.55
0.60
0.65
Frequency
0.68
0.70
0.72
0.74
0.76
0.78
Fix. prob.
HDL
MAI
TWI
WBL
WSJ
(a) Content words
0.15
0.20
0.25
0.30
Frequency
0.42
0.44
0.46
0.48
0.50
Fix. prob.
HDL
MAI
TWI
WBL
WSJ
(b) Function words
Table 2 :
2POS tagging results on different test sets using 200 out-of-domain sentences for training.
DGAZE is using gaze features from Dundee. Best result for each row in bold face
trained a type-constrained (averaged) perceptron
model with drop-out and a standard feature model
(from
catalog.ldc.upenn.edu/LDC2011T03 2 catalog.ldc.upenn.edu/LDC2012T13
https://github.com/coastalcph/ rungsted
Discriminative training methods for Hidden Markov Models. Michael Collins, EMNLP. Michael Collins. 2002. Discriminative training meth- ods for Hidden Markov Models. In EMNLP.
From news to comments: Resources and benchmarks for parsing the language of Web 2.0. Jennifer Foster, Ozlem Cetinoglu, Joachim Wagner, Josef Le Roux, Joakim Nivre, Deirde Hogan, Josef Van Genabith, IJCNLP. Jennifer Foster, Ozlem Cetinoglu, Joachim Wagner, Josef Le Roux, Joakim Nivre, Deirde Hogan, and Josef van Genabith. 2011. From news to comments: Resources and benchmarks for parsing the language of Web 2.0. In IJCNLP.
Nomen est omen: Investigating the dominance of nouns in word comprehension with eye movement analyses. Marco R Furtner, Pierre John F Rauthmann, Sachse, Advances in Cognitive Psychology. 591Marco R Furtner, John F Rauthmann, and Pierre Sachse. 2009. Nomen est omen: Investigating the dominance of nouns in word comprehension with eye movement analyses. Advances in Cognitive Psy- chology, 5:91.
Using search-logs to improve query tagging. Kuzman Ganchev, Keith Hall, Ryan Mcdonald, Slav Petrov, ACL. Kuzman Ganchev, Keith Hall, Ryan McDonald, and Slav Petrov. 2012. Using search-logs to improve query tagging. In ACL.
Parafoveal-onfoveal effects in normal reading. Alan Kennedy, Joël Pynte, Vision research. 452Alan Kennedy and Joël Pynte. 2005. Parafoveal-on- foveal effects in normal reading. Vision research, 45(2):153-168.
With blinkers on: Robust prediction of eye movements across readers. Franz Matthies, Anders Søgaard, EMNLP. Seattle, Washington, USAFranz Matthies and Anders Søgaard. 2013. With blinkers on: Robust prediction of eye movements across readers. In EMNLP, Seattle, Washington, USA.
Stability selection. Nicolai Meinshausen, Peter Bühlmann, Journal of the Royal Statistical Society: Series B (Statistical Methodology). 724Nicolai Meinshausen and Peter Bühlmann. 2010. Stability selection. Journal of the Royal Statis- tical Society: Series B (Statistical Methodology), 72(4):417-473.
Learning where to look: Modeling eye movements in reading. Matthias Nilsson, Joakim Nivre, CoNLL. Matthias Nilsson and Joakim Nivre. 2009. Learning where to look: Modeling eye movements in reading. In CoNLL.
Improved part-of-speech tagging for online conversational text with word clusters. Olutobi Owoputi, O' Brendan, Chris Connor, Kevin Dyer, Nathan Gimpel, Noah A Schneider, Smith, NAACL. Olutobi Owoputi, Brendan O'Connor, Chris Dyer, Kevin Gimpel, Nathan Schneider, and Noah A Smith. 2013. Improved part-of-speech tagging for online conversational text with word clusters. In NAACL.
A universal part-of-speech tagset. Slav Petrov, Dipanjan Das, Ryan Mcdonald, arXiv:1104.2086arXiv preprintSlav Petrov, Dipanjan Das, and Ryan McDonald. 2011. A universal part-of-speech tagset. arXiv preprint arXiv:1104.2086.
On-line comprehension processes and eye movements in reading. K Rayner, S A Duffy, Reading research: Advances in theory and practice. G. E. MacKinnon M. Daneman and T. G. WallerNew YorkAcademic PressK.; Rayner and S. A. Duffy. 1988. On-line compre- hension processes and eye movements in reading. In G. E. MacKinnon M. Daneman and T. G. Waller, editors, Reading research: Advances in theory and practice, pages 13-66. Academic Press, New York.
Eye movements in reading and information processing: 20 years of research. Keith Rayner, Psychological bulletin. 1243372Keith Rayner. 1998. Eye movements in reading and information processing: 20 years of research. Psy- chological bulletin, 124(3):372.
Can relevance be inferred from eye movements in information retrieval. Jarkko Salojärvi, Ilpo Kojo, Jaana Simola, Samuel Kaski, Proceedings of WSOM. WSOM3Jarkko Salojärvi, Ilpo Kojo, Jaana Simola, and Samuel Kaski. 2003. Can relevance be inferred from eye movements in information retrieval. In Proceedings of WSOM, volume 3, pages 261-266.
Evaluation of a low-cost open-source gaze tracker. Javier San Agustin, Henrik Skovsgaard, Emilie Mollenbach, Maria Barret, Martin Tall, Dan Witzner Hansen, John Paulin Hansen, Proceedings of the 2010 Symposium on Eye-Tracking Research & Applications. the 2010 Symposium on Eye-Tracking Research & ApplicationsACMJavier San Agustin, Henrik Skovsgaard, Emilie Mol- lenbach, Maria Barret, Martin Tall, Dan Witzner Hansen, and John Paulin Hansen. 2010. Evalua- tion of a low-cost open-source gaze tracker. In Pro- ceedings of the 2010 Symposium on Eye-Tracking Research & Applications, pages 77-80. ACM.
Grammar Induction and Parsing with Dependency-and-Boundary Models. Spitkovsky Valentin Ilyich, STANFORD UNIVERSITYPh.D. thesisValentin Ilyich Spitkovsky. 2013. Grammar Induction and Parsing with Dependency-and-Boundary Mod- els. Ph.D. thesis, STANFORD UNIVERSITY.
Token and type constraints for cross-lingual part-of-speech tagging. Oscar Täckström, Dipanjan Das, Slav Petrov, Ryan Mcdonald, Joakim Nivre, TACL. 1Oscar Täckström, Dipanjan Das, Slav Petrov, Ryan McDonald, and Joakim Nivre. 2013. Token and type constraints for cross-lingual part-of-speech tag- ging. TACL, 1:1-12.
Word representations: a simple and general method for semi-supervised learning. Joseph Turian, Lev Ratinov, Yoshua Bengio, ACL. Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: a simple and general method for semi-supervised learning. In ACL. |
9,919,173 | Resources for Lexicalized Tree Adjoining Grammars and XML encoding: TagML | This work addresses both practical and theoretical purposes for the encoding and the exploitation of linguistic resources for feature-based Lexicalized Tree Adjoining Grammars (LTAG). The main goals of these specifications are the following: 1. Define a recommendation by way of an XML (Bray et al., 1998) DTD or schema (Fallside, 2000) for encoding LTAG resources in order to exchange grammars, share tools and compare parsers. 2. Exploit XML, its features and the related recommendations for the representation of complex and redundant linguistic structures, based on a general methodology. 3. Study the resource organisation and the level of generalisation which are relevant for a lexicalized tree grammar. | [
7932716,
11283565,
7591151
] | Resources for Lexicalized Tree Adjoining Grammars and XML encoding: TagML
Patrice Bonhomme bonhomme@loria.fr
LORIA, BP 239, F-54506 Vandoeuvre-lès-Nancy
Patrice Lopez lopez@dfki.de
DFKI GmbH, Stuhlsatzenhausweg 3, D-66123 Saarbrücken
Resources for Lexicalized Tree Adjoining Grammars and XML encoding: TagML
This work addresses both practical and theoretical purposes for the encoding and the exploitation of linguistic resources for feature-based Lexicalized Tree Adjoining Grammars (LTAG). The main goals of these specifications are the following: 1. Define a recommendation by way of an XML (Bray et al., 1998) DTD or schema (Fallside, 2000) for encoding LTAG resources in order to exchange grammars, share tools and compare parsers. 2. Exploit XML, its features and the related recommendations for the representation of complex and redundant linguistic structures, based on a general methodology. 3. Study the resource organisation and the level of generalisation which are relevant for a lexicalized tree grammar.
Introduction
A working group gathering people who are currently working on this formalism, mainly from TA-LaNa (University of Paris 7, France), ENST (Paris, France), INRIA (Rocquencourt, France), LORIA (Nancy, France) and DFKI (Saarbrücken, Germany), made it necessary to define a shared and common representation of grammars, with the aim of exchanging both grammars and associated resources, developing normalised parsers and specifying generic tools. Our proposal, TagML (Tree Adjoining Grammars Markup Language), is a general recommendation for the encoding and exchange of the resources involved in LTAG. This paper presents a model and a syntax to represent, encode and maintain LTAG grammars independently of any particular development, software or architecture. A significant number of works are based on the TAG (Tree Adjoining Grammar) formalism (Joshi et al., 1975). Still, for the moment, none has led to a common representation format for grammars which would facilitate the exchange of TAG grammars and associated data, as well as the development of normalised parsers and the specification of fully compatible generic tools. Research and work around the formalism of Lexicalized TAG (LTAG) (Abeillé, 1991) have increased during the last ten years, both from the linguistic point of view and at the computational level. Based on solid mathematical foundations, the linguistic choices associated with the LTAG formalism remain relatively free and contribute to the variety of results and to the large number of developments and applications.
The XTAG system, developed in the early nineties, offers the first workbench dedicated to LTAG grammar design and an Earley-like parser. However, the integrated parser provides only a binary answer (accepted or rejected sentence), hardly compatible with testing a large grammar. Partial results and diagnostics about errors are necessary to test a grammar and to identify the step involved in the failure of a parse during grammar debugging. Thus, designing a new parser is justified, but integrating new components into the XTAG system is technically very difficult for someone who has not been involved in the initial development of the system. More generally, this system has not been technically developed to be distributed, since it is based on its own, unspecified formats. It requires narrowly-specialised skills for its installation, its usage and its maintenance.
In this introduction, we describe our approach to the definition of a generic architecture for encoding and managing TAG grammars, the contribution of XML, and the global structure of an LTAG grammar. The remainder of this paper is organised as follows. In section 2, we give an overall view of the TagML architecture, and we start with a presentation of the elementary tree encoding principles, including a description of the phrase structure components and the feature structures and their place within the TagML architecture. We complete the section with the notion of tree families, allowing a meta-description and organisation of elementary trees. In section 3, we tackle the problems connected to lexicon management and its links with the remainder of the resources. We propose an organisation of these resources within an abstract relational model. Section 4 is concerned with the management of the parsing result and output, which means representing all derived trees (the phrase structures of a sentence) and, in parallel, all derivation trees (structures close to the semantic dependency tree of a sentence).
Towards a generic architecture
The definition of a generic tool for parsing and managing LTAG grammars presupposes a common language specification, shared by the concerned community. The first step toward a more generic and flexible tool involves the definition of an appropriate encoding for the management of large-size linguistic resources. This encoding should be able to structure possibly heterogeneous data and give the possibility of representing the inevitable redundancies between lexical data. Consequently, we decided to define TagML as an application of the XML recommendation.
Derived from SGML, a standard (ISO, 1986) for encoding electronic texts with information about the structural layout and content of the document, the XML recommendation stands out as one of the best encoding schemes intended for structuring information, and it provides interesting possibilities for managing and accessing textual data components. These aspects have been exploited for managing linguistic resources within the Text Encoding Initiative or TEI guidelines (Sperberg-McQueen and Burnard, 1994). The normalisation of the resources associated with an LTAG grammar is a necessity, first to interchange data between members of the community working on this formalism, then to share tools with the aim of evaluating and comparing our results. This normalisation process will offer the community the opportunity to benefit from some existing tools (editors, grammar design workbench, tools for testing and comparing different parsers, etc.) and also to exploit reusable software components. Anyone implementing a tool on the basis of the TagML encoding can guarantee its interoperability with existing ones.
The initial motivations for this encoding proposition are mainly centred on the notion of grammar re-usability, as well as software independence and long-term preservation as a whole. It should be noted that the choices we propose in this paper are complementary to a set of tools intended to be easily and freely distributed to the community: we could mention an XML parser, graphical editors and a parsing workbench. The developments are based on Java, which ensures their reliability and portability.
Why XML for encoding LTAG grammars?
<topic type="cuisine"> <link xlink:type="simple" xlink:href="doc#id(x)" xlink:show="replace"/> <author name="Legros"> <link xlink:type="simple" xlink:href="doc#id(x)" xlink:show="replace"/> </author> </topic> <book id="x"> <title xml:lang="fr">Les nouvelles cuisines</title> </book> Although it is still young, the motivation for using XML for encoding LTAG resources comes from the following properties that appear to be particularly relevant for our needs:
XML is a meta-language for defining markup languages. It provides a common syntax for structuring resources according to their content, meaning, and above all their logical structure. It provides a means to encode and exchange linguistic resources in an independent way, between applications for display, manipulation and processing.
The virtual resources principle (several views and/or levels of annotation onto the same data) can be exploited for the management of the lexicon by offering different entry points to the same data (see the example in figure 1). For example, one could reverse a morpho-syntactic lexicon designed first for parsing (entries are the inflected forms) into a morpho-syntactic lexicon dedicated to the generation task (entries are the lemma and a set of morphological features). This notion of virtual resources notably avoids the duplication of data at the physical level and makes the maintenance of the resources easier.
The consistency of an LTAG grammar is very important for developing a broad-covering grammar which supposes several developers, several lexical components, etc. In our case, the consistency of a grammar is a consequence of the validation of its XML encoding with a specific DTD defined by the concerned research community.
Loading a whole Lexicalized TAG with a system such as XTAG (Doran et al., 1994) is time-costly and resource-consuming. In terms of implementation, this means that significant effort has been put into normalising and, especially, optimising the reading of the input and access to XML data. An interesting property of XML is that it is no longer necessary to load the whole XML-encoded lexicon to search for some particular entries. Some normalised software components, such as the SAX (Simple API for XML) interface, provide this kind of functionality in a straightforward manner (see the sketch after this list).
The requirements in data typing and preprocessing (for example, typing in terms of left or right auxiliary trees) can be easily handled at two different levels, either at the description level or at the application level. The first level means that we can describe the property to test with a restricted DTD. The second level is handled by the XML application, and the property is tested by the implementation. Both solutions are of course combinable.
Finally, the semi-structured data model underlying XML allows the use of extended queries based both on the hierarchical structure and on the contents of the resources.
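To make the fourth point above concrete, here is a minimal sketch (in Python, using the standard xml.sax interface mentioned above) of how a handler can pull a single entry out of a large XML-encoded lexicon without building the whole document in memory. The element and attribute names follow the syntactic lexicon example given later in the paper (entry, flex, t, xlink:href); the file name and the overall shape of the code are our own illustration, not part of the TagML specification.

```python
import xml.sax

class EntryFinder(xml.sax.ContentHandler):
    """Streams an XML lexicon and keeps only the entries whose 'flex'
    attribute matches the inflected form we are looking for."""

    def __init__(self, target_form):
        super().__init__()
        self.target_form = target_form
        self.in_match = False
        self.matches = []

    def startElement(self, name, attrs):
        if name == "entry" and attrs.get("flex") == self.target_form:
            self.in_match = True
            self.matches.append({"flex": self.target_form, "trees": []})
        elif self.in_match and name == "t":
            # reference to an elementary tree schema that this entry can anchor
            self.matches[-1]["trees"].append(attrs.get("xlink:href"))

    def endElement(self, name):
        if name == "entry":
            self.in_match = False

# The lexicon file is streamed; memory use does not depend on its size.
handler = EntryFinder("voile")
xml.sax.parse("syntactic_lexicon.xml", handler)
print(handler.matches)
```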
Structure of an LTAG grammar
The exploitation of virtual resources for the encoding of an LTAG grammar is promising, but it requires explicitly identifying, within a whole LTAG grammar, the various resources involved: first in the morphological component, in the syntactic lexicon and in the set of elementary trees, but also in the shared forests corresponding to the derived and derivation trees resulting from an LTAG parse.
Writing a global DTD for all these resources requires identifying the constraints on these different data. Writing a property DTD additionally allows the descriptive power of XML to be exploited to check specific properties and consistency constraints in an LTAG grammar.
The application of an XML encoding can be viewed as linguistic engineering work, but the research needed to define the encoding principles requires a deep study of the LTAG formalism and its properties. We will see that this study has also opened interesting issues from the parsing point of view. More generally, we think that this work shows the relevance of the XML formalism for the representation of complex heterogeneous data.
Encoding of an elementary tree schema
2.1. Principle
We call an elementary tree schema a non-lexicalized elementary tree, which is the classical tree used in existing LTAG lexicons to factorize complete elementary tree representations. The term schema can also be used, see (Candito, 1999). In an elementary tree schema, we can distinguish:
The structural part, i.e. a partial phrase structure or a partial parsing tree.
The set of feature equations constraining top and bottom feature structures.
One can note that these two parts present many redundancies in the different elementary trees due to the lexicalization and the extended domain of locality property. We want to be able to encode these redundancies in order to exploit them to improve the parsing process.
Some specifications for the encoding of elementary tree families have been proposed on the basis of the SGML norm in (Issac, 1998). A tree family gathers the elementary tree schemas that can be considered as syntactic realizations of the same predicate-argument schema. This kind of structure for the set of elementary trees is frequent because it makes the development of a grammar easier. Still, by associating a tree family with a lemma, the entry can really anchor only a subset of the elementary tree schemas of this family. This subset can be small for inflected languages such as French, Spanish or Korean. The selection is performed by filtering features during the lexicalization stage. Such a unification operation is costly, while it is possible to indicate statically in the lexicon the exact set of elementary tree schemas that a precise inflected entry can anchor. Our choice is to consider the elementary tree schema description as the document to encode. A tree family is just a particular and optional view on a set of these elementary tree schemas.
An example of the representation of a schema proposed by (Issac, 1998) is given in figure 2. We can note that the encoding of the features is basic and just corresponds to introducing common labels for shared feature values. We exploit XML first to encode feature equations without these labels, and secondly to avoid redundancies.
We keep from (Issac, 1998) most of the elements involved in the encoding of the elementary tree schema structure:

/* ... */ <n id="n1"> <val>&P</val> <fs type="b"> <f name="num">sing</f> <f name="pers"><l id="f1"/></f> </fs> /* ... */ </n>

Figure 2: Node representation in (Issac, 1998)

<t>: elementary tree, the document that we specify in this part.
<n>: general node; the attribute cat gives the category of this node and the attribute type distinguishes foot node, substitution node and anchor.
<fs>: feature structure, of type bottom or top.
<f>: typed feature (attribute-value), similarly to the TEI. For typed feature equations, we introduce the element linkGrp specified in the TEI specifications to group internal or external links (element link) and allow their re-usability.
Structural component
Similarly to the proposal of (Issac, 1998), we represent the tree structure of an elementary tree schema straightforwardly, by an isomorphism with the XML tree structure (see figure 3).
(Figure 3: an elementary tree schema rooted in P, with a substitution node N, a V node and its anchor, and the corresponding XML encoding.)

<t> <n id="n0" n="0" cat="P"> <n id="n1" n="1" type="subs" cat="N"/> <n id="n2" cat="V" n="2"> <n id="n3" n="3" type="anchor"/> </n> </n> </t>

In practice, in a broad-covering lexicalized grammar, the redundancy of common substructures is very important. For instance, the subtree dominated by a V category with a depth of 1 (the anchor and the pre-terminal category) is shared by most of the trees describing verbal syntactic contexts (several hundred trees for the English XTAG grammar, several thousand for the French LTAG grammar). This redundancy can be very useful to encode, for linguistic as well as efficiency reasons. In order to represent these redundancies, we propose to use XML links and to identify every node systematically. We use the principle of virtual resources systematically to obtain only one representation of the different nodes within the whole grammar. Consequently, each structure or complete elementary tree is a particular structuring view of these XML documents.
Feature equations
The TEI proposes a recommendation for the encoding of feature structures, which we propose to integrate into TagML.
This normalisation allows the features to be typed and feature percolation to be represented explicitly. The features used in the LTAG formalism have only atomic values, thanks to the extended domain of locality principle.
The feature equations of an elementary tree schema can be viewed as a global term for a complete elementary tree, or as several terms distributed over the various nodes of an elementary tree and sharing common variables. We propose to link the shared features directly, in order to avoid having to manage shared labels during the parsing of the feature structures. These links are specified in linkGrp.
We can give a type to a linkGrp, i.e. to a feature equation, for instance subject-verb agreement, and then, by identifying this linkGrp, share the corresponding feature equation between several elementary tree schemas. If we still consider the example of the subject-verb agreement feature equation, the corresponding linkGrp will be shared by all elementary tree schemas that include this kind of agreement. The nodes that carry the features linked by percolation can be identified in the two following ways:
By the definition of global and unique identifiers for the nodes of all the elementary tree schemas belonging to a single tree family (all the nodes that represent a subject are identified by the same id).
By a special attribute which identifies the function of a given node involved in the feature equation. Access to these specific nodes is obtained with the selection language proposed both for the XSL Transformation Language (Clark, 1999) and for the XML pointers called XML Paths (Clark and DeRose, 1999).
As we can see in figure 4, the percolated feature is linked to the linkGrp corresponding to the feature equation, so it is straightforward to access through this link all the other features which share the same value, without dealing with any labels or tables of labels.

<n cat= "P " id= "n0 "> <fs type= "top " id= "fs0 "> <f name= "num " id= "f0"> <link xlink:type= "simple " xlink:href= "doc#id(l0) "/> </f> <f name= "det " id= "f1"><minus/></f> </fs> <fs type= "bottom " id= "fs1 "> /* ... */ </fs> </n> /* External document */ <linkGrp type= "accord "> <link targets="
id(n0)/fs[1][@type,top]/f[1][@name,num] id(n2)/fs[1][@type,bottom]/f[1][@name,num]"
id= "l0"/> </linkGrp> /* ... */ (Candito, 1996) and both works are complementary. Such a system can identify the unique function associated to the different nodes of a given elementary tree schema. Since the feature equations are shared and typed , we can apply on them a specific treatment in order to shared computation and consequently decrease significantly the number of unification. This optimisation is important because the worst case complexity of the unification in LTAG is exponential.
Morpho-syntactic lexicon
2.5. Global structure of a TagML document
The global structure of a TagML document is illustrated in figure 6.
<? xml version="1.0" encoding="iso−8859−1"?> <! DOCTYPE tag SYSTEM "tagml.dtd "> <tag xmlns:xlink= "http://www.w3.org/XML/XLink/0.9 "> <desc> This a fake LTAG grammar</desc> <tlist name= "determiner "> <desc> Generic trees for determiners</desc> <t id= "A1_determiner1 " n= "1" name= "determiner "> <desc> Tree description goes here</desc> <sample> A sample</sample> <n> /* ... */</n> </t> <t> <n> /* ... */</n> </t> </tlist> <tlist> /* ... */ </tlist> /* ... */ </tag> Figure 6: Global structure of a TagML document containing a set of generic elementary trees
Tree family
In order to manage efficiently a set of elementary trees that could be quite large, TagML provides a mechanism for gathering elementary trees that share the same subcategorisation frame and correspond to different syntactic structures. A tree family (indicated by the tag <tfamily>) can be described from a set of elementary tree schemas by defining a set of links to a subset of these elementary tree schemas.
Figure 7 presents an example of a tree family definition (in this example, I1_VTA_0 and I2_VTD_1B refer to two elementary tree schemas for transitive verbs, and I2_adjectif6 and I1_adjectif1 to two elementary tree schemas for adjectives).
Lexicon
For a Lexicalized Tree Grammar, lexicon and grammar are merged into a syntactic lexicon, but we usually consider three kinds of databases: the morphological lexicon, the syntactic lexicon and the set of elementary tree schemas.

<? xml version="1.0" encoding="iso−8859−1"?> <! DOCTYPE tag SYSTEM "tagml.dtd "> <tag xmlns:xlink= "http://www.w3.org/XML/XLink/0.9 "> <desc> Our tree families</desc> <tfamily name= "transitive verb "> <desc> Tree family for transitive verbs</desc> <t xlink:type= "simple " xlink:href= "I1_VTA_0.xml" xlink:show= "replace " xlink:actuate= "auto "/> /* ... */ <t xlink:type= "simple " xlink:href= "I2_VTD_1B.xml " xlink:show= "replace " xlink:actuate= "auto "/> </tfamily> <tfamily name= "adjective "> <desc> Tree family for adjectives</desc> <t xlink:type= "simple " xlink:href= "A1_adjectif1.xml " xlink:show= "replace " xlink:actuate= "auto "/> /* ... */ <t xlink:type= "simple " xlink:href= "I2_adjectif6.xml " xlink:show= "replace " xlink:actuate= "auto "/> </tfamily> /* ... */ </tag>

(Contents of figure 7.)

The encoding of the syntactic grammar is more complex than that of single elementary tree schemas. The role of this lexicon is to link lexical entries to the right set of schemas. Figure 8 proposes an example of a very simple encoding for this lexicon, which only consists in an enumeration of the correct schemas for all valid inflected entries. The complexity is a consequence of the fact that many pieces of information are related to each other and are distributed across these three kinds of data.
Our first attempt to define encoding principles for these lexicons was done directly on the basis of the XML formalism, without any special regard to the abstract organisation of the data. This first result was not satisfactory because of two main weaknesses: the limited possibility of extending the encoding principles, and the limited sharing of distributed resources according to the virtual resource principle. The main reason is that XML does not offer an abstract view of the logical organisation of resources that would allow general encoding principles to be defined directly. To model these resources and their global organisation, we have therefore used an abstract relational model which allows the representation of each independent resource and its relations to others. This abstract relational model has a direct realisation in XML.

<lexicon type= "syntax "> /* ... */ <entry flex= "voile"> <lemma form= "voiler"> <f att= "cat ">V</f> <f att= "num ">sing</f> <f att= "mode ">ind</f> <t xlink:type= "simple " xlink:show= "replace " xlink:href= "l1_VTA_4#id(n0) "/> <t xlink:type= "simple " xlink:show= "replace " xlink:href= "l1_VTC_B#id(n0) "/> /* ... */ </lemma> /* ... */ </entry> /* ... */ </lexicon>

Figure 8: A basic encoding of a syntactic lexicon with links to elementary tree schemas
The relation model for LTAG is presented in the next section and should result in an XML DTD for the syntactic grammar level in future work.
The RROM
Our abstract level of representation is called RROM (Relational Resource Organisation Model). An RROM is composed of a set of Resource Entities (RE) and a set of relations between these entities. An RE corresponds to an independent and abstract type of data that is used in an NLP system (for example word, lemma or category). Given a set of resources, independent data means that this data is not the result of a set of relations between other REs. An RE is represented with a general name and is associated with a data type definition. An instantiation of an RE is a realization of this RE according to the corresponding data type specifications. In the following figures, an RE is graphically represented with a square box.
The relations between entities used in this model are characterised by two couples of integers on each edge. Depending on the direction of the relation, this couple gives the arity of the relation with the RE given by the edge, by analogy to the couples on the edges used in the entity/relation models of relational databases. Two REs can also be in relation. An RROM can be graphically represented with diagrams describing which REs are related to one another. In these diagrams, a Resource Relation (RR) is represented with an ellipse. We distinguish two kinds of edges.

A morphological lexicon database, such as MULTEXT (Ide and Véronis, 1994), usually associates an inflected word with a set of lemmas and a set of features. Reversible access is needed, for generation for example. A lemma is an abstract entity that is represented by a normal form of a word (the entry of a dictionary) and can be realized by all the possible inflections of a word. We can distinguish as resource entities the inflected words, the lemmas and the morphological features (including a category) that characterise the inflection. An inflection is a relation between one inflected word, one lemma and a set of morphological features. Depending on the direction in which one follows this inflection relation (from the lemma or from the inflected word), we obtain reversible access. Each lemma is characterised by a link to one inflected word, which is the normal form that identifies this lemma (see figure 5). Conversely, an inflected word is not always the normal form of a lemma.
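A minimal sketch of such a reversible inflection relation (Python; the entries are illustrative, reusing the voile/voiler example from the syntactic lexicon in figure 8 rather than actual MULTEXT data):

```python
from collections import defaultdict

# Each inflection relates one inflected word, one lemma and a feature set.
inflections = [
    {"word": "voile",  "lemma": "voiler", "features": {"cat": "V", "num": "sing", "mode": "ind"}},
    {"word": "voiles", "lemma": "voiler", "features": {"cat": "V", "num": "plur", "mode": "ind"}},
]

# Index the same relation in both directions (parsing vs. generation).
by_word = defaultdict(list)
by_lemma = defaultdict(list)
for inf in inflections:
    by_word[inf["word"]].append(inf)
    by_lemma[inf["lemma"]].append(inf)

# Parsing direction: inflected form -> candidate (lemma, features) pairs.
print([(i["lemma"], i["features"]) for i in by_word["voile"]])

# Generation direction: lemma + required features -> inflected forms.
wanted = {"num": "plur"}
print([i["word"] for i in by_lemma["voiler"]
       if all(i["features"].get(k) == v for k, v in wanted.items())])
```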
The LTAG syntactic lexicon
The previous RROM model for the morphological lexicon is extended to the other resources needed at the syntactic level. An inflection (a lemma and a set of morphological features, including verb mode for example) corresponds to a set of schemas. This lexicalization relation can include the instantiation of co-anchors (a lemma and a set of possibly under-specified morphological features) and of some additional syntactic features in the schema. Each syntactic instantiation gives a complete elementary tree. If we assume that the linguistic principles given in (Abeillé et al., 1990) and (Candito, 1999) are fulfilled by the grammar, each syntactic instantiation corresponds to only one semantic instantiation (semantic consistency principle). This model allows an incremental view of the lexicon resources that could easily be extended.
Figure 9 presents the corresponding RROM. To simplify, tree families and the structuring of features are not included in this example.
Such an approach based on a relational model to define XML encoding has also been used for the encoding of multilevel annotated textual corpus (Lopez and Romary, May 2000).
Parsing forest
Principle
The result of a parse based on an LTAG is two packed representations, called shared forests, representing respectively all derived trees and all derivation trees. The representation of such a forest with XML is possible by using XML links. The resulting structure is equivalent to an acyclic graph representation. Maintaining this kind of shared structure allows a representation that is not only more compact with respect to the size of the data, but also more efficient and useful for sharing additional semantic processing.
Derived tree forest
A derived shared forest is an element corresponding to a dedicated tag. The trees are then expressed similarly to an elementary tree schema. The nodes can contain either only one centre feature structure resulting from the unification of the top and bottom feature structures, or the two non-unified feature structures if we consider a partial derived tree.
Derivation tree forest
We consider here two kinds of nodes (each corresponding to an element with a dedicated tag) for the derivation trees (which also correspond to a dedicated tag): a node for an initial tree (the value of the attribute type is i) and a node for an auxiliary tree (the value of the attribute type is a). For such a node representing a given elementary tree, additional attributes also represent the Gorn address of the node where the attachment has been realized in the father tree, the name of the elementary tree schema and the lexical string anchoring the tree.
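As a rough illustration of the information carried by such a node, a small Python sketch (the field names are ours and only mirror the attributes just listed; in an actual packed forest, children may of course be shared between alternative derivations):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class DerivationNode:
    tree_type: str      # "i" for an initial tree, "a" for an auxiliary tree
    schema: str         # name of the elementary tree schema
    anchor: str         # lexical string anchoring the tree
    gorn_address: str   # address in the father tree where the attachment happened
    children: List["DerivationNode"] = field(default_factory=list)

# One illustrative derivation: a transitive verb schema with two substituted arguments.
root = DerivationNode("i", "I1_VTA_0", "voile", gorn_address="0")
root.children.append(DerivationNode("i", "N_arg", "Jean", gorn_address="1"))
root.children.append(DerivationNode("i", "N_arg", "la photo", gorn_address="2.2"))
print(root)
```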
Conclusion
We have presented in this paper the first specifications of a general encoding, called TagML, of the various linguistic resources involved in the LTAG formalism. This work can be viewed as a generalisation and a normalisation of the XTAG format. It includes first a complete specification for the encoding of elementary tree schemas:
Used in an implemented graphical workbench for LTAG.
Associated with an XSL style sheet in order to produce LaTeX documentation (on the basis of the pstricks package).
We have also proposed some high-level specifications for the lexicon, based on a relational model called RROM, and a straightforward extension to the encoding of results (derivation and derived forests). The lexicalization of the formalism and the complex distribution of the resources over several knowledge sources raise several problems if we want to capture sharing properties. Considering these difficulties, the XML encoding formalism is powerful and relevant for representing complex heterogeneous linguistic resources. Future work on TagML will complete the encoding specification of the lexical components.
Parallel work (Lopez and Romary, May 2000) focuses on the efficiency of XML-based processing, including efficient internal representations directly deduced from XML documents and based on Finite State Techniques. Applied to TagML, our ambition is then to provide a complete and efficient LTAG resource management system based on an XML architecture. We welcome all contributions to the ongoing development of the TagML specification, and we hope that it will appear promising enough to give rise to interest and possible contributions from the whole LTAG community.
Figure 1: Basic example of virtual resources and reusability
Figure 3: Isomorphy between elementary tree schema and XML tree structure
Figure 4: Shared features and factorisation of a common feature equation
Figure 5: RROM for the morphological lexicon
Figure 7: Sample of a TagML document and two tree families from a set of elementary tree schemas
Figure 9: Simplified RROM for LTAG resources
Anne Abeillé, Kathleen M. Bishop, Sharon Cote, and Yves Schabes. 1990. A Lexicalized Tree Adjoining Grammar for English. Technical Report MS-CIS-90-24, Department of Computer and Information Science, University of Pennsylvania.
Anne Abeillé. 1991. Une grammaire lexicalisée d'arbres adjoints pour le français. Ph.D. thesis, Université Paris 7.
Tim Bray, Jean Paoli, and C. M. Sperberg-McQueen. 1998. Extensible Markup Language (XML) 1.0. W3C, http://www.w3.org/TR/REC-xml, February. W3C Recommendation 10-February-1998.
Marie-Hélène Candito. 1996. A principle-based hierarchical representation of LTAGs. In COLING'96, Copenhagen, Denmark.
Marie-Hélène Candito. 1999. Structuration d'une grammaire LTAG : application au français et à l'italien. Ph.D. thesis, University of Paris 7.
James Clark and Steve DeRose. 1999. XML Path Language (XPath) Version 1.0. W3C, http://www.w3.org/TR/xpath, November. W3C Recommendation 16 November 1999.
James Clark. 1999. XSL Transformations (XSLT) Version 1.0. W3C, http://www.w3.org/TR/xslt, November. W3C Recommendation 16 November 1999.
Christy Doran, Dania Egedi, Beth Ann Hockey, B. Srinivas, and Martin Zaidel. 1994. XTAG System - A Wide Coverage Grammar for English. In COLING, Kyoto, Japan.
David C. Fallside. 2000. XML Schema Part 0: Primer. W3C, http://www.w3.org/TR/xmlschema-0, April. W3C Working Draft, 7 April 2000.
Nancy Ide and Jean Véronis. 1994. Multext (multilingual tools and corpora). In 14th Conference on Computational Linguistics (COLING'94), Kyoto, Japan.
ISO. 1986. Information Processing, Text and Office Systems, Standard Generalized Markup Language (SGML) = Traitement de l'information, systèmes bureautiques, langage standard généralisé de balisage (SGML). First edition, 1986-10-15. International Organization for Standardization, Geneva, Switzerland. International Standard ISO 8879-1986. Federal information processing standard; FIPS PUB 152.
Fabrice Issac. 1998. A Standard Representation Framework for TAG. In Fourth International Workshop on Tree Adjoining Grammars and Related Frameworks (TAG+4).
A. Joshi, L. Levy, and M. Takahashi. 1975. Tree Adjunct Grammars. Journal of Computer and System Sciences, 10(1).
18,454,652 | A Two-stage Model for Content Determination | In this paper we describe a two-stage model for content determination in systems that summarise time-series data. The first stage involves building a qualitative overview of the data set, and the second involves using this overview, together with the actual data, to produce summaries of the time-series data. This model is based on our observations of how human experts summarise time-series data. | [
6375093,
591655
] | A Two-stage Model for Content Determination
Somayajulu G. Sripada ssripada@csd.abdn.ac.uk
Dept. of Comp. Sc., Univ. of Aberdeen, Aberdeen, UK
Ehud Reiter ereiter@csd.abdn.ac.uk
Dept. of Comp. Sc., Univ. of Aberdeen, Aberdeen, UK
Jim Hunter jhunter@csd.abdn.ac.uk
Dept. of Comp. Sc., Univ. of Aberdeen, Aberdeen, UK
Jin Yu
Dept. of Comp. Sc., Univ. of Aberdeen, Aberdeen, UK
A Two-stage Model for Content Determination
In this paper we describe a two-stage model for content determination in systems that summarise time-series data. The first stage involves building a qualitative overview of the data set, and the second involves using this overview, together with the actual data, to produce summaries of the time-series data. This model is based on our observations of how human experts summarise time-series data.
Introduction
This paper addresses the problem of content determination in data summarisation. Content determination as the name indicates is the process responsible for determining the content of the texts generated by an NLG system (Reiter and Dale 2000).
Although content determination is probably the most important part of an NLG system from the end-user's perspective, there is little agreement in the NLG community as to how content determination should be done, with different systems adopting widely varying approaches. Also, algorithms and architectures for content determination seem to often be based on the intuitions of system developers, instead of on empirical observations, although detailed content determination rules are often based on corpus analysis and interaction with experts.
In this paper we propose a general architecture for content determination in data summarisation systems which assumes that content determination happens in two stages: first a qualitative overview of the data is formed, and second the content of the actual summaries is decided upon. This model is based on extensive knowledge acquisition (KA) activities that we have carried out in the SUMTIME project (Sripada, 2001), and also matches observations made during KA activities carried out in the STOP project. We have not yet implemented this model, and indeed one of the issues that we need to think about is to what degree a content-determination strategy used by human experts is also an appropriate one for a computer NLG system.
Content Determination
Content determination is the task of deciding on the information content of a generated text. In the three-stage pipeline model of Reiter and Dale (2000), content determination is part of the first stage, document planning, along with document structuring (determining the textual and rhetorical structure of a text). Content determination is extremely important to end users; in most applications users probably prefer a text which poorly expresses appropriate content to a text which nicely expresses inappropriate content.
From a theoretical perspective content determination should probably be based on deep reasoning about the system's communicative goal, the user's intentions, and the current context (Allen and Perrault 1980), but this requires an enormous amount of knowledge and reasoning, and is difficult to do robustly in real applications.
In recent years many new content determination strategies have been proposed, ranging from the use of sophisticated signal-processing techniques (Boyd 1997) to complex planning algorithms (Mittal et al 1998) to systems which exploit cognitive models of the user (Fiedler 1998). However, most of these strategies have only been demonstrated in one application. Furthermore, as far as we can tell these strategies are usually based on the intuition and experiences of the developers. While realisation, microplanning, and document structuring techniques are increasingly based on analyses of how humans perform these tasks (including corpus analysis, psycholinguistic studies, and KA activities), most papers on content determination make little reference to how human experts determine the content of a text. Human experts are often consulted with regard to the details of content rules, especially when schemas are used for content determination (Goldberg et al 1994, McKeown et al 1994); but they rarely seem to be consulted (as far as we can tell) when deciding on the general algorithm or strategy to use for content determination.
Summarising Time-Series Data
Text summaries of Time-Series Data
Time-series data is a collection of values of a set of parameters over time. Such data is very common in the modern world, with its proliferation of databases and sensors, and humans frequently need to examine and make inferences from time-series data.
Currently, human examination of time-series data is generally done either by direct inspection of the data (for small data sets), by graphical visualisation, or by statistical analyses. However, in some cases textual summaries of time-series data are also useful. For example, newspapers regularly publish textual summaries of weather predictions, the results of polls and surveys, and stock market activity, instead of just showing numbers and graphs. This may be because graphical depictions of time-series data require time and skill to interpret, which is not always available. A doctor rushing to the side of a patient who is suffering from a heart attack, for example, may not have time to examine a set of graphs of time-series data, and a newspaper reader may not have the statistical knowledge necessary to interpret raw poll results.
Perhaps the major problem today with textual descriptions of time-series data is that they must be produced manually, which makes them expensive and also means they can not be produced instantly. Graphical depictions of data, in contrast, can be produced quickly and cheaply using off-the-shelf computer software; this may be one reason why they are so popular. If textual summaries of time-series data could be automatically produced by software as cheaply and as quickly as graphical depictions, then they might be more widely used.
SUMTIME
The goal of the SUMTIME project is to develop better techniques for automatically generating textual summaries of time-series data, in part by integrating leading-edge NLG and time-series analysis technology. We are currently focusing on two domains:
Meteorology -producing weather forecasts from numerical weather simulations. This work is done in collaboration with Weather News Inc (WNI)/Oceanroutes, a leading meteorological company. Gas Turbines -summarising sensor readings from a gas turbine. This work is done in collaboration with Intelligent Applications, a leading developer of monitoring software for gas turbines.
These domains are quite different in time-series terms, not least in the size of the data set. A typical weather forecast is based on tens of values for tens of parameters, while a summary of gas-turbine sensor readings may be based on tens of thousands of values for hundreds of parameters. We hope that looking at such different domains will help ensure that our results are generalisable and not domainspecific. We will start working on a third domain in 2002; this is likely to be a medical one, perhaps (although this is not definite) summarising sensor readings in neonatal intensive care units.
The first year of SUMTIME (which started in April 2000) has mostly been devoted to knowledge acquisition, that is to trying to understand how human experts summarise timeseries data. This was done using various techniques, including corpus analysis, observation of experts writing texts, analysis of content rules suggested by experts, discussion with experts, and think-aloud sessions, where experts 'think aloud' while writing texts (Sripada, 2001).
Example
The following table shows an example segment of meteorological time series data, specifically predicted wind speed and wind direction at an offshore oil exploration site. The time field is shown in 'day/hour' format. The above example is just a sample showing the data and its corresponding forecast text for the wind subsystem. Real weather forecast reports are much longer and are produced from data involving many more weather parameters than just wind speed and wind direction.
Human Summarisation
Meteorology
In the domain of weather forecasting, we observed how human experts carry out the task of summarising weather data by video recording a meteorologist thinking aloud while writing weather forecasts. Details of the KA have been described in Sripada (2001). Our observations included the following:
1. In the case of weather forecasts, time-series data represent the values of important weather parameters (wind speed, direction, temperature, rainfall), which collectively describe a single system, the weather. It seemed as though the expert was constructing a mental picture of their source using the significant patterns in time series. Thus the first activity is that of data interpretation to obtain a mental model of weather.
2. The mental model of the weather is mostly in terms of the elements/objects related to atmosphere, like cold fronts and warm fronts; it also seems to be qualitative instead of numerical. In other words, it qualitatively describes the meteorological state of the atmosphere. The expert calls this an 'overview of the weather'.
3. Building the overview involves the task of interpretation of the time series weather data. While interpreting this data the expert used his meteorological knowledge (which includes his personal experience in interpreting weather data) to arrive at an overview of the weather. During this phase, he appeared to be unconcerned about the end user of the overview (see 4.1.1 below). We call this process Domain Problem Solving (DPS) where information is processed using exclusively the domain knowledge.
4. Forecasts are written after the forecaster gets a clear mental picture (overview) of the weather. Building the overview from the data is an objective process which does not depend on the forecast client (user), whereas writing the forecast is subjective and varies with client.
Examples
Two examples of the influence of the overview on wind texts (Section 3.3) are:
1. When very cold air flows over a warm sea, surface winds may be underestimated by the numerical weather model. In such cases the forecaster uses his 'overview of the weather' to increase wind speeds and also perhaps add other instability features to the forecast such as squalls.
2. If the data contains an outlier, such as a wind direction which is always N except for one time period in which it is NE, then the expert uses the overview to decide if the outlier is meteorologically plausible and hence should be reported or if it is likely to be an artefact of the simulation and hence should not be reported.
The above examples involve reasoning about the weather system. Forecasters also consider user goals and tasks, but this may be less affected by the overview. For example, in one think-aloud session, the forecaster decided to use the phrase 20-24 to describe wind speed when the data file predicted a wind speed of 19kt. He explained to us that he did this because he knew that oil-rig staff used different operational procedures (for example for docking supply boats) when the wind exceeded 20kt, and he also knew that even if the average wind speed in the period was 19kt, the actual speed was going to vary minute by minute and often be above 20kt. Hence he decided to send a clear signal to the rig staff that they should expect to use '20kt or higher' procedures, by predicting a wind speed of 20-24. This reasoning about the user took place after the overview had been created, and did not seem to involve the overview.
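A tiny sketch of this kind of user-oriented rounding rule (Python; the 20 kt threshold and the 4 kt range width come from the episode above, while the 2 kt margin and the function itself are our own illustration, not the forecaster's actual procedure):

```python
def wind_speed_phrase(predicted_kt, operational_threshold=20, range_width=4):
    """Report a speed range, nudging it above an operational threshold when the
    prediction is close enough that minute-by-minute speeds will often exceed it."""
    low = predicted_kt
    if operational_threshold - 2 <= predicted_kt < operational_threshold:
        low = operational_threshold   # send a clear '20 kt or higher' signal
    return f"{low}-{low + range_width}"

print(wind_speed_phrase(19))   # '20-24'
print(wind_speed_phrase(12))   # '12-16'
```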
Gas Turbine Sensors
Unlike the previous domain, in the domain of gas turbine (GT), currently there are no textual summaries of turbine data written by humans.
Thus we have asked the domain experts to comment orally on the data. However, the experts have attempted to summarise their comments at the end of each session if they found something worth summarising. Our observations included:
1. The main task is to identify the abnormal data and summarise it. However, an abnormal trend in a specific channel might have been caused due to a change in another channel (for instance, an increase in the output voltage can be explained with a corresponding increase in the fuel input). Thus individual channel data needs to be interpreted in the context of the other channels.
2. The expert agrees during the KA session that he first analyses the data numerically to obtain qualitative trends relating to the GT before generating comments. Therefore the state of the GT that produced the data is constructed through data interpretation and the knowledge of the state is then used to check if the turbine is in a healthy state or not. Since GT is an artefact created by humans it is possible to have a fairly accurate model of states of a GT (unlike weather!).
3. The phrases used by the expert often express the trends in the data as if they were physical happenings on the turbine, like "running down" for a decreasing trend in shaft speed data. This indicates that the expert is merely expressing the state of the GT. This in turn indicates that at the time the summarisation is done, the mental model of the state of the GT is available.
Evidence from Other Projects
After making the above observations, we examined think-aloud transcripts from an earlier project at the University of Aberdeen, STOP, which involved building an NLG system that produced smoking-cessation letters from smoking questionnaires. These transcripts (from think-aloud sessions of doctors and other health professionals manually writing smoking-cessation letters) showed that in this domain as well, experts would usually first build an overview (in this case, of the smoker) before starting to determine the detailed content of a letter. Below is an excerpt from one of the transcripts of a KA session: « …. The first thing I have got to do is to read through the questionnaire just to get some idea of where he is at with his smoking. …… » We did not investigate overview formation in any detail in STOP, but the issue did come up once in a general discussion with a doctor about the think-aloud process. This particular doctor said that he built in his mind a mental image of the smoker (including a guess at what he or she looked like), and that he found this image very useful in deciding how best to communicate with the smoker.
In another work, RajuGuide, once again there is evidence of an overview influencing content determination (Sripada 1997). RajuGuide is a system that generates route descriptions. At a higher level of abstraction, RajuGuide has two parts. The first part is responsible for planning the route the user wanted. The second module is responsible for generating the text describing the route. The route computed by the first part, which is in the form of a series of coordinates, is not directly communicated to the user. Instead the second part attempts to enrich the route depending upon what the user already knows and what additional information the knowledge base has for that particular route. We believe that the route computed by the route planner is the overview in this case and that it drives the content determination process in the second part.
Two-stage Model for content determination
These observations have led us to make the following hypotheses:
1. Humans form a qualitative overview of the input data set. 2. Not all the information in the overview is used in the text. 3. The overview is not dependent on pragmatic factors such as the user's taste, these are considered at a later stage of the content determination process.
Based on the above hypotheses, we propose a two-stage model for content determination as depicted in Figure 1. It is assumed that Domain Data Source (DDS) is external to the text generator. It has been assumed that a Domain Problem Solver or Domain Reasoner (DR) is available for data processing. This reasoning module is essentially useful to draw inferences while interpreting the input data set and ultimately is responsible for generating the overview. Communication Goal (CG) is the input to the data summarisation system in response to which it accesses DDS to produce an overview of the data using the DR. In the context of the overview produced by DR, the Communication Reasoner (CR) system generates the final content specification taking into account the influence of the User Constraints (UC) and other pragmatic factors. This content is then sent to subsequent NLG modules (not shown), such as microplanning and surface realisation.
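A schematic rendering of this model as a minimal Python sketch (the rules and data are toy illustrations; the point is only the separation of concerns: the domain reasoner builds the overview without consulting the user, and the communication reasoner uses the overview, the data and the user constraints to fix the content):

```python
def domain_reasoner(wind_speeds):
    """Stage 1: build a qualitative overview of the data.
    Only domain knowledge is used; the user is not considered here."""
    rising = wind_speeds[-1] > wind_speeds[0]
    return {"trend": "increasing" if rising else "steady or decreasing",
            "max": max(wind_speeds)}

def communication_reasoner(overview, wind_speeds, user):
    """Stage 2: decide what to say, using the overview as context for
    content rules and taking user constraints into account."""
    messages = []
    if overview["trend"] == "increasing":
        messages.append("wind increasing")
    # a user-dependent rule: rig staff care about the 20 kt operational threshold
    if user.get("operational_threshold") and overview["max"] >= user["operational_threshold"]:
        messages.append(f"speeds above {user['operational_threshold']} kt expected")
    return messages

data = [12, 14, 17, 21]                       # illustrative wind speeds (kt)
overview = domain_reasoner(data)
print(communication_reasoner(overview, data, {"operational_threshold": 20}))
```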
Our model has some similarities to the one proposed by Barzilay et al (1998), in that the Domain Reasoner uses general domain knowledge similar to their RDK, while the Communication Reasoner uses communication knowledge similar to their CDK and DCK.
The central feature of the above model is the idea of data overview and its effect on content selection. One possible use of overviews is to trigger context-dependent content rules. The time-series analysis part of SUMTIME is largely based on Shahar's model (1997), which makes heavy use of such rules. In Shahar's model contexts are inferred by separate mechanisms; we believe that these should be incorporated into the overview, but this needs further investigation.
At the current stage of our project we have only a rough idea of what makes up the proposed data overview. Our suspicion is that it is hard to give a generic definition of the data overview for all domains. Instead, we would like to think of the data overview as the result of inferences made from the input data so as to help in triggering the right content determination rules. For example, in our meteorology domain, the input time-series data comes from a numerical weather prediction (NWP) model, but even the most sophisticated NWP models do not fully represent the real atmosphere - all models work with approximations. Thus the NWP data displayed to the meteorologist is interpreted to arrive at a conceptual model in his or her head, which is the overview.
Issues with the two stage model
There are a number of issues that need to be resolved with respect to the two-stage model described above.
7.1 Is overview creation a human artefact?
The main basis for including the overview in two stage model has been the observation made during the think aloud sessions that experts form overviews before writing texts. Now it can be argued that even if humans need an overview, computer programs may not. Evidently, it is hard to ever prove the contrary. But what can be done is to show the advantages gained by a computer program by using an overview for content selection.
7.2 Does the overview have any other utility than just providing context for content determination rules?
We believe that the overview can play multiple roles in the overall process of writing textual forecasts. First, the overview can bring in additional information into the text that is not directly present in the underlying raw data. In Reiter and Dale's (2000) terminology, overviews are a technique for generating Computable Data content, that is content which is not directly present in the input data but can be computed or inferred from it. Such content provides much of the value of summary texts. Indeed, one could argue that simple textual descriptions of a set of data values without extra computed or inferred content, such as those produced by TREND (Boyd, 1997), might not be that much more useful than a graph of the data.
The overview may also help in deciding how reliable the input data is, which is especially important in the meteorology domain, since the data comes from an NWP simulation. This could, for example, help the generation system decide whether to use precise temporal terms such as Midnight or vague temporal terms such as tonight. Again one could argue that the ability to convey such uncertainty and reliability information to a non-specialist is a key advantage of textual summaries over graphs.
In general, the overview allows reasoning to be carried out on the raw data and this will probably be useful in many ways.
7.3 How is the overview related to the domain ontology?
The basic concepts present in an overview may be quite different from the basic concepts present in a written text. For example, the overview built by our expert meteorologist was based on concepts such as lapse rate (the rate at which temperature varies with height), movement of air masses, and atmospheric stability.
However, the texts he wrote mentioned none of these; instead they talked about wind speed, wind direction, and showers. In the STOP domain, overviews created by doctors seemed to often contain many qualitative psychological attributes (depression, self-confidence, motivation to quit, etc) which were not explicitly mentioned in the actual texts written by the doctors.
This suggests that the conceptual ontology, that is the specification of underlying concepts, underlying the overview may be quite different from the ontology underlying the actual texts. The overview ontology includes concepts used by experts when reasoning about a domain (such as air masses or motivation), while the text ontology includes concepts useful for communicating information to the end user (such as wind speed, or longer life expectancy).
7.4 What do experts think about the two-stage model?
When the two-stage model was reported back to a WNI expert who had participated in a think-aloud session, the expert agreed that he does build an overview (as he did during the KA session) while writing forecasts, but felt that its use may not be necessary for writing all forecasts. In his opinion, the interpretation of most data sets doesn't require the use of the overview. However, he was quick to add that the quality of the forecasts can be improved by using overviews, which facilitate reasoning with the weather data.
Evaluation
We are currently building a testbed system called SUMTIME-MOUSAM which will enable us to test the hypotheses we have presented in this paper and other hypotheses suggested by our KA activities. SUMTIME-MOUSAM is a framework system that consists of:
• "Infrastructure" software for accessing data files, regression testing of new software versions, etc.
• An ontology, which defines a conceptual level of representation of texts.
• A corpus of human-written texts with their corresponding conceptual representations defined using the above ontology.
• Scoring software which compares the output of a module (either at a conceptual or text level) against the human corpus.
Because we are primarily interested in content issues, it is important to evaluate our system at a content level as well as at a text level. To support this, we are developing conceptual representations of the texts we will be generating, which can also be extracted from human texts by manual analysis. SUMTIME-MOUSAM is currently best developed in the area of producing wind texts. In this area, we have developed a conceptual representation and manual annotation guide (with good interannotator agreement, generally kappa values of .9 or higher); built an initial software system to automatically produce such texts based on a threshold model without an overview; and begun the process of analysing differences. We are currently working on extending SUMTIME-MOUSAM to other parts of weather forecasts, such as statements describing clouds and precipitation, and plan in the future to extend it to the gas-turbine domain.
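To make the agreement figure above concrete, the sketch below computes Cohen's kappa for two annotators' label sequences. It is a generic Python illustration, not the SUMTIME-MOUSAM scoring software itself, and the wind-trend labels are invented for the example.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa for two annotators' label sequences of equal length."""
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # Expected chance agreement if each annotator labelled independently at their own rates.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical wind-text annotations from two annotators.
a = ["increase", "steady", "decrease", "steady", "increase"]
b = ["increase", "steady", "decrease", "increase", "increase"]
print(round(cohens_kappa(a, b), 2))
```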
With regard to testing hypotheses specifically about two-stage content determination (the subject of this paper), our plan is as follows:
1. Compare the output of the non-overview based software to human summary texts, and identify cases where an overview seems to be used.
2. Ask human experts to build an overview (using a GUI), modify our software to use this overview when generating texts, and see if this results in texts more similar to the human texts.
3. Attempt to automatically generate the overview from the data, and again compare the resultant texts to human texts.
At some point towards the end of SUMTIME, we also hope to conduct user task evaluations. For example, we may show gas-turbine engineers our summary texts and see if this helps them detect problems in the gas turbine.
Conclusion
Our experience in three domains shows that human experts build qualitative overviews when writing texts, and that these overviews are used by the experts for inference and to provide a context for specific content rules. We believe that overviews could also be very useful in computer NLG systems, and are currently working on testing this hypothesis, as part of the SUMTIME project.
Acknowledgements
Many thanks to our collaborators at WNI/Oceanroutes and Intelligent Applications, especially Ian Davy, Dave Selway, Rob Milne, and Jon Aylett; this work would not be possible without them! Thanks also to Sandra Williams and the anonymous reviewers for their comments on a draft of this paper. This project is supported by the UK Engineering and Physical Sciences Research Council (EPSRC), under grant GR/M76881.
Allen J. and Perrault C. R. (1980). Analyzing Intention in Utterances. Artificial Intelligence, 26:1-33.
Barzilay R., McCullough D., Rambow O., DeChristofaro J., Korelsky T., and Lavoie B. (1998). A New Approach to Expert System Explanations. In Proceedings of INLG-1998, pages 78-87.
Boyd S. (1997). Detecting and Describing Patterns in Time-varying Data Using Wavelets. In Advances in Intelligent Data Analysis: Reasoning About Data, X Lui and P Cohen (Eds.), Lecture Notes in Computer Science 1280, Springer Verlag.
Fiedler A. (1998). Macroplanning with a Cognitive Architecture for the Adaptive Explanation of Proofs. In Proceedings of INLG-1998, pp 88-97.
Goldberg E., Driedger N. and Kittredge R. L. (1994). Using Natural-Language Processing to Produce Weather Forecasts. IEEE Expert, 9(2), pp 45-53.
McKeown K., Kukich K., and Shaw J. (1994). Practical Issues in Automatic Document Generation. In Proceedings of ANLP-1994, pp 7-14.
Mittal V., Moore J., Carenini G., and Roth S. (1998). Describing Complex Charts in Natural Language: A Caption Generation System. Computational Linguistics, 24:431-467.
Reiter E. and Dale R. (2000). Building Natural Language Generation Systems. Cambridge University Press.
Reiter E., Robertson R. and Osman L. (2000). Knowledge Acquisition for Natural Language Generation. In Proceedings of the First International Conference on Natural Language Generation (INLG-2000), pp 217-224.
Shahar Y. (1997). Framework for Knowledge-Based Temporal Abstraction. Artificial Intelligence, 90:79-133.
Sripada S. G. (1997). Communicating Plans in Natural Language: Planning and Realisation. PhD Thesis, Indian Institute of Technology, Madras, India.
Sripada S. G. (2001). SUMTIME: Observations from KA for Weather Domain. Technical Report, Computing Science Dept., Univ of Aberdeen, Aberdeen AB24 3UE, UK. Awaiting approval from industrial collaborators. |
31,929,572 | Information Extraction for Academic Conference and It's Application | Internet has become a major channel for academic information dissemination in recent years. As a matter of fact, academic information, e.g., "call for papers", "call for proposals", "advances of research", etc., is crucial for researchers, since they have to publish research outputs and capture new research trends. This study focuses on extraction of academic conference information including topics, temporal information, spatial information, etc. We hope to reduce the overhead of searching and managing conference information for researchers and to improve the efficiency of publication of research outputs. An automatic procedure for conference information retrieval and extraction is proposed first. A sequence of experiments is carried out. The experimental results show the feasibility of the proposed procedure. The F1 measure for text classification is over 80%; the recall and F1 measure for extraction of named entities are over 86% and 70%, respectively. A system platform for academic conference information retrieval and extraction is implemented to demonstrate the practicality. This system features functionalities of document retrieval, named entity extraction, faceted browsing, and a calendar with a fusion of academic activities and daily life for researchers. | [] | Information Extraction for Academic Conference and It's Application
September/December 2010
陳光華 khchen@ntu.edu.tw
Department of Library and Information Science
National Taiwan University
Kuang-hua Chen
Information Extraction for Academic Conference and It's Application
Computational Linguistics and Chinese Language Processing
153-4237September/December 2010學術會議資訊之擷取及其應用
Abstract
Internet has become a major channel for academic information dissemination in recent years. As a matter of fact, academic information, e.g., "call for papers", "call for proposals", "advances of research", etc., is crucial for researchers, since they have to publish research outputs and capture new research trends. This study focuses on extraction of academic conference information including topics, temporal information, spatial information, etc. We hope to reduce the overhead of searching and managing conference information for researchers and to improve the efficiency of publication of research outputs. An automatic procedure for conference information retrieval and extraction is proposed first. A sequence of experiments is carried out. The experimental results show the feasibility of the proposed procedure. The F1 measure for text classification is over 80%; the recall and F1 measure for extraction of named entities are over 86% and 70%, respectively. A system platform for academic conference information retrieval and extraction is implemented to demonstrate the practicality. This system features functionalities of document retrieval, named entity extraction, faceted browsing, and a calendar with a fusion of academic activities and daily life for researchers.
acting actuarial science adapted physical education admiralty law advertising aerobiology aeronautical engineering aerospace engineering aesthetics affine geometry african studies agricultural economics agricultural education agricultural engineering agrology agronomy air force studies algebraic computation algebraic geometry algebraic number theory algebraic topology american history american politics american studies analytical chemistry ancient egyptian religion ancient history animal communications animal science animation anthropology of technology apiculture appalachian studies applied psychology approximation theory aquaculture architectural engineering archival science art education art history artillery arts administration asian american studies asian studies associative algebra astrobiology astronomy astrophysics atheism and humanism atomic, molecular, and optical physics australian literature automotive systems engineering beekeeping behavioral geography behavioural economics behavioural science bilingual education biochemistry bioeconomics biogeography bioinformatics biological psychology biology biomechanical engineering biomedical engineering biophysics black studies or african american studies botany business administration business english business ethics calligraphy campaigning canadian literature canadian studies canon law cardiology cardiothoracic surgery cartography category theory cell biology celtic studies chamber music chemical engineering cheminformatics chemistry education chicano studies child welfare children geographies chinese history chinese studies or sinology choreography christianity chronobiology church music civics civil procedure classical archaeology classics climatology coastal geography cognitive behavioral therapy cognitive psychology cognitive science collective behavior combat engineering communication design communication engineering
1 .
1緒論 在全球化的趨勢之下,大學的學術評價更加受到前所未有的重視,有各式各樣以全球大 學為標的之學術評鑑報告陸續公告周知,如上海交通大學(ARWU, 2010)與英國 Quacquarelli Symonds (QS, 2010)所做的世界大學排名。此外,Thomson Reuters 公司的 SCI、SSCI、A&HCI 等資料庫,以及 Journal Citation Report(JCR),提供的統計數據, Call For Paper 有時間的期限,而期刊 Special Issue 的 Call For 目標式網頁擷取(focused crawling)是一種蒐集研討會通知資訊的方式。有別於一 般 Web Crawler 漫無目的地抓取所有的網頁,Focused Crawling 會先過濾與主題無關的內 容,也就是會應用一組特定主題的關鍵詞,用以訓練並建立文件分類機制,再由此分類 機制引導 crawler 擷取與主題相關的網頁。(Chakrabarti, van den Berg, & Dom, 1999)另 外還可以將 Focused Crawling 稍加變化,依據一組系統已經記載的研討會議網站清單, 反 向地 蒐集相 關網 頁文件 ,這 種網頁 資料 蒐集的 替代 方案被 稱為 反向式 網頁 擷取 (backward crawling)。(Brennhaug, 2005)這種網頁蒐集機制首先以主題關鍵字,透過 搜尋引擎取得相關網頁的網址及網頁內容,以建構候選相關文件集。再接續利用搜尋引 擎的反向連結查詢功能(back link query),一併蒐集連結到候選文件的網頁。又考量到 這種由反向連查詢所得的網頁也有可能再連結到其他研討會議網頁,所以再繼續以正向 連結(forward crawling)擷取該網頁中的其他 URL,以發掘潛在的相關網頁。此程序將 會一直重覆執行直到重覆的次數達到預設的門檻。 雖然具名實體辨識的研究很早就開始了,但是學術會議資訊擷取的研究則是比較不 受到許多研究者的關注。Lazarinis(1998)提出應該應用資訊擷取技術進行論文徵稿通 告 (call for paper,簡稱 CFP) 的檢索,有別於傳統上僅以文件檢索技術檢索 CFP。Lazarinis 發現這種作法在固定 Recall 的情形下,可以提昇 45%-60%的 Precision,這項研究確認應 將學術會議資訊的檢索,視為資訊擷取的問題,而非單純的文件檢索的問題。 Schneider(2005)應用 Conditional Random Fields(CRF)模型,擷取 CFP 的重要 訊息,Schneider 特別關注文件版面特徵(layout features)的貢獻,發現版面特徵可以提 昇約 30%的 F1 分數(F1 measure)。因為,Schneider 的研究關注於各項特徵的效益,使 用的測試資料僅有 263 篇乾淨無雜訊的 CFP,而避開真實文件各種複雜的情況,因此很 難建構一個實際可行的資訊服務系統。 目前亦有許多學術組織,建構了 Conference Calendar 的相關網頁,希望有利於會議 資訊的流通,但是這種資訊彙整形式的網頁,僅提供瀏覽的功能,沒有進階檢索功能, 使用者仍須耗費相當的精力,才能瀏覽相關的會議資訊。另外,尚有功能比較好的類似 系統,例如 WikiCFP 與 EventSeer 等 CFP 資訊共享服務系統,但是提供的多為電腦科學 相關學術領域的學術會議資訊。WikiCFP(http://www.wikicfp.com/)是使用 Wiki 建構的往往成為各國評鑑國內大學學術成果的計量指標。在這種激烈的學術競爭環境之下,且
學術競爭力被視為國家競爭力的一環,大學教授莫不兢兢業業地、努力地從事學術研究。
學術研究人員掌握學術會議資訊的即時性與確實性,對於其研究工作的進展與研究成果
的發表,是非常重要的。本研究在這樣的背景下,研發學術會議資訊檢索與擷取系統,
希望能夠有效地由充斥浮濫資訊的網際網路,擷取相關的學術會議資訊。
學術研究人員的學術活動是非常多元的,學術資源服務的類型眾多,本研究將著重
於以資訊擷取為基礎的學術會議資訊的檢索與擷取。研究人員的學術活動中很重要的一
項便是「學術研究的出版」,學術的出版有兩個主要的方向,一個是學術會議,另一則
是學術期刊。會議的 Submission 也有時間的期限,協助研究人員掌握這些重要的訊息,自動地由網路擷取學
術會議的時間訊息、空間訊息、與主題訊息,協助研究人員管理時間與空間訊息,將有
很大的助益。若能進一步搭配「行事曆(calendar)」的功能,對於研究人員而言更是事
半功倍的。換言之,一般行事曆功能僅提供使用者新增資訊、更新資訊、刪除資訊,為
了搭配學術研究的出版,行事曆必須有更進階的功能,能夠依據使用者的 profile 搜尋
Call For Paper 與 Call For Submission,填入行事曆,並依據使用者的設定,提供警示 (alert)
的服務。
研討會通知或會議論文投稿須知,一般是透過既有的郵寄目錄發送,或是以網頁文
件的形式發佈,也因此訊息傳播的目標通常局限於特定族群及研究機構。即使使用者自
行利用網頁搜尋工具在網際網路上查找,所取得的資訊可能不完整,或是已錯過參與的
時機。若要提供即時的且整合的研討會相關資訊,蒐集網際網路上與研討會通知相關網
頁的自動機制,是重要的一環。
一般在網路上大量蒐集網頁的方式,通常利用網頁擷取機器人(web crawler)到處
拜訪網站並擷取所有網頁內容。由於 Web Crawler 的建置困難度較高,維護與效能控管
也較為複雜,不當的設計常會佔據網路頻寬資源,或導致被網站封鎖而無法擷取內容。
因此另有一種方式,並不採用傳統的 web crawler 而是修改網頁擷取機制,以適當的關鍵
字與網頁搜尋引擎的整合來蒐集網頁。
若以蒐集研討會議徵稿通告的相關資訊來檢視網頁自動擷取機制,無論是正向或反
向擷取,都會面臨下列兩項議題:(1)網路上傳播的研討會會議資訊經常更新,例如投稿
截止日期的延期、會議地點資訊的更新、或是新加入的 workshop 議程等等,而所蒐集的
研討會會議資訊必需能夠即時反應各項更新資訊。(2)目前雖然將「研討會議通知資訊」
定義為與研討會議相關的訊息通知網頁,但網頁內容通常包含許多與研討會無關的各種
式樣各種規格的其他資訊,例如文字或影音廣告,網站目錄選項,或其他網站連結等,
這也造成在擷取網頁機制建置時,文件相關程度判斷的問題。
本研究基於前述的背景,運用網頁搜尋技術,以及資訊檢索與擷取技術,發展一套
學術會議資訊檢索與擷取的自動程序,並實際建構系統平台,以服務學術研究人員。本
文的結構如下:文獻探討一節將說明資訊擷取的技術,運用於學術會議檢索的情形,相
關資訊服務系統的現況;學術會議資訊蒐集一節討論由網際網路蒐集學術會議資訊的方
法,以及過濾不相關資訊與雜訊的作法;資訊擷取模型之訓練與建置一節探討學術會議
資訊擷取模型的訓練與建立;系統實作與功能一節討論系統實作的方法,以及各項功能;
最後則是簡短的結論。
2. 文獻探討
學術會議資訊之檢索屬於資訊檢索的應用研究,其中牽涉的研究議題眾多,至少有具名
實體的辨識(named entities identification)、分群歸類(clustering and classification)、
文件檢索(text retrieval)。然而,若要建置完整的應用系統,則牽涉更多的技術,如時
間與空間資訊的搭配,各種 API 應用元件的整合。本研究嘗試建構學術會議資訊檢索與
擷取系統,首先探討資訊檢索與擷取技術的現況,以及現有檢索系統的發展。限於篇幅,
本文並不嘗試進行全面而完整的相關文獻的探討。
學術會議資訊文件含有許多具名實體,包括會議名稱、會議時間、會議地點、會議
主題、截稿日期等等,已有許多學術論文探討這個研究課題,訊息理解會議(Message
Understanding Conference,簡稱 MUC)是第一個將具名實體的辨識視為一項檢索研究的
評量項目,企圖推動資訊檢索研究社群,投注研究能量,發展更新的技術,提昇具名實
體辨識的績效。(MUC, 2001)訊息理解會議認為不僅僅需要辨識重要的實體,還必須
確認實體之間的關係(relationship),MUC-6 則明確地規範三個層次的資訊擷取的研究
議題:具名實體之辨識、照應詞之解析、樣版資訊之建構。照應詞之解析是串連具名實
體及其對應的照應詞(如代名詞);腳本樣版則是依照預先訂定的樣版,由文件中擷取
相關的資訊填入樣版的欄位。(Grishman & Sundheim, 1996)
CFP 共享系統,資訊來源是依賴使用者提供相關會議資訊;EventSeer (http://eventseer.net/)
是一個 Web 2.0 的網站,企圖建構一個電腦科學研究的社群網站,除了允許登錄使用者
自由發佈學術資訊外,另外運用 Robot 主動搜集網際網路上的 CPF 資訊。
Takada(2008)建構的 ConfShare 資訊服務系統,透過瀏覽器提供學術會議資訊檢
索的服務。Takada 認為研究者為了參加學術會議學習最新的研究成果,或發表本身的研
究成果,都需要蒐集學術會議的相關資訊。蒐集資訊的工作是參加會議不可缺乏的,但
也造成研究者不小的負擔。ConfShare 以使用者(亦即研究者)的角度,提供與學術會議
相關資訊的各種服務,希望能夠減輕前述研究者的額外負擔。
Xin, Li, Tang, and Luo(2008)使用 Constrained Hierarchical CRF(CHCRF)標註學
術會議官方網站的網頁以及屬性,企圖建構一個學術會議的行事曆系統。Xin 等人關注
的是學術會議的官方網站而非 CFP,然而官方網站成立的時間通常都很晚,不像 CFP 的
快速與即時,而且,官方網站的資料是透過下達會議名稱與時間,由 Google 檢索而得,
這樣的假設並非很合理,因為,類似的系統應該是藉由學術研究的主題取得學術會議資
訊,而非藉由特定的會議名稱或是舉辦時間。
本研究企圖建構的學術會議資訊檢索與擷取系統(Academic Conference Information
Retrieval and Extraction System,ACIRES) ,較接近於 Takada(2008)的 ConfShare 系統,
但是在功能面仍有差異,使用的技術亦不相同,涵蓋的學科主題範疇亦有很大的差異。
下文將說明本研究的資訊的蒐集、處理、模型的訓練、以及系統的實作 。
3. 學術會議資訊蒐集
學術會議資訊的檢索與擷取,當然需要被檢索的標的物,必須有一套機制蒐集網路上的
論文徵稿通告,作為系統開發前,資訊擷取模型訓練之用;系統開發完成,正式運轉時,
亦需要這套機制持續蒐集論文徵稿通告,以服務學術研究人員以及一般的使用者。
為了有效地蒐集相關的學術論文徵稿通告,本研究採用目標式網頁擷取(focused
crawling)的概念,先以學門分類表做為各學科主題的查詢關鍵字,利用網頁搜尋引擎蒐
集所需之論文徵稿通告。我們採用澳洲與紐西蘭標準研究分類表(Australian and New
Zealand Standard Research Classification,簡稱 ANZSRC)為主(Pink & Bascand, 2008),
再整合 Wikipedia 提供的學術領域列表以補充新興學科。由於論文徵稿通告不一定會標
示所屬學科領域,以學門分類名稱為查詢關鍵詞所蒐集的論文徵稿通告,可能無法涵蓋
各學科領域所有重要的研討會資訊。因此,可再進一步分析第一批搜集的論文徵稿通告
的研究議題相關詞彙,整合到學科主題關鍵詞列表,形成所謂的 bootstrapped crawling,
讓學術會議資訊的蒐集更為廣泛且完整。表 1 依字母順序,簡要列出部分之主題關鍵詞。
利用前述的主題關鍵詞,透過 Google 搜尋引擎,分別取得查詢結果前五十筆最相關
的網頁,再接續依相關網頁的內容執行一次正向連結查詢(forward link query),一併收
錄該五十筆網頁中超連結所指到的網頁。透過網頁搜尋引擎,可一次性地蒐集大量的相
關網頁,但無法掌控網頁提供的會議資訊是否已過期。再考量研討會資訊的提供,必須
符合即時性與時效性,因此再進一步利用網頁快訊服務(Google Alert),補充最新的研
討會資訊。
網頁快訊服務就是當新的網頁發佈於網際網路時,網頁搜尋引擎比較該新網頁與使
用者預設的 profile 的相關度,若是在搜尋結果的前 20 名內,就會立即以電子郵件通知
快訊訂閱客戶。利用此服務特性,將前述的學科主題關鍵詞,做為取得快訊的搜尋詞彙,
即時取得最新發佈的網頁文件。對於以網頁快訊服務取得的相關網頁,本研究也會進一
步執行一次正向連結查詢。
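As a rough illustration of the forward-link step described above, the Python sketch below collects the URLs that a page already judged relevant links to, so they can be added as candidate CFP pages. The keyword-search step and the Google Alert subscription are not shown, and the seed URL is hypothetical.

```python
import requests
from urllib.parse import urljoin
from bs4 import BeautifulSoup

def forward_links(url):
    """Fetch a candidate CFP page and return the absolute URLs it links to."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    return {urljoin(url, a["href"]) for a in soup.find_all("a", href=True)}

# Hypothetical seed page returned by a keyword query such as
# "computational linguistics call for papers".
candidates = forward_links("http://example.org/cfp/sample-conference.html")
```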
無論是從網頁搜尋引擎或是網頁快訊服務蒐集而得的網路資訊,必定會有重覆的情
形,因此在蒐集網頁時,必須初步過濾重覆的網頁。以網頁搜尋引擎取得的相關網頁,
由於是同一時間取得的網頁內容,因此不需考量網頁更新的因素,直接比對網址過濾重
覆者。以網頁快訊服務取得的新網頁,若網址與現有文件相同,則必須考量網頁更新因
素,先比對兩筆網頁的上次更新時間,再保留更新時間較近的網頁。若無法取得網頁的
上次更新時間,則保留由網頁快訊服務取得的網頁。
由於從網頁搜尋引擎及網頁快訊服務廣泛蒐集的網頁數量龐大,大量的文件中可能
包含與研討會論文徵稿通告無關的網頁,為了提升學術會議資訊自動標註的準確度,必
須篩選無關的網頁文件。本研究運用文件自動分類技術,可以迅速處理大量文件,避免
繁瑣且冗長的人工分類作業,我們採用開放程式碼 Rainbow Classifier 自動過濾非會議徵
稿通告的網頁文件。(McCallum, 1996)由於 Rainbow Classifier 需要一組已分類的文件
做為分類模型所需的訓練文件,此訓練文件將利用人工分類的方式產生,該人工分類的
作業一併整合至人工標註輔助系統,讓標註人員可同時並行訓練文件分類與文件內容標
註工作。
表 1. 部分主題關鍵詞
abnormal psychology
accompanying
accounting scholarship
acoustic engineering
acoustics
CRF 為機器學習式(machine learning-based)演算法,需設定數種資料特徵以訓練模型, 因此以學術會議徵稿通告必備的重要資訊項目,作為資料特徵欄位(如表 2 所示),再 使用一部分學術研討會徵稿通告,做為訓練文件集,先以人工的方式標註特徵欄位,並 利用特殊詞典或地名資料庫標示特定詞彙(例如地名、會議專有名詞等),建立 CRF 學 習樣版,再經由 CRF 自動學習與測試,調整資訊辨識的準確度,以建置資訊擷取的自動 機制。 實驗結果如表 5 所示,Outside Test 意指測試資料與訓練資料不同,Inside Test 意指測試 資料與訓練資料相同。Inside Test 的結果一定會比 Outside Test 的結果好,如果 Outside Test 的結果很接近於 Inside Test,代表分類模型的適應性很好;訓練資料越多,涵蓋面 越廣,分類結果也越好。 實驗結果顯示,SVM 模型的表現最好,Naive Bayes 次之,而 kNN 最差。SVM 在 Inside Test 與 Outside Test 的表現差異最小,而 Naive Bayes 變動的幅度很大,代表 SVM 模型對於未知資料的解釋性很強。除此之外,無論是何種模型,F micro 與 F macro 的表現相 當,代表每一次實驗結果的變異性很小。值得注意的是,本研究是採用 Recall-Oriented 的作法,調整系統參數,進行文件的自動分類,原因是希望能夠儘量取得會議相關的文 件,因此較著重於 Recall。依據前述實驗的結果,本研究發展的系統將採用 SVM 模型, 自動分類大量的網路文件,判定是否為 CFP 文件後,再進一步擷取文件中的會議資訊。 完成人工標註的網頁文件轉換成此特定格式後,將其中四分之三的文件做為訓練文 件集,四分之一做為測試文件集。透過 CRF 以訓練文件的 Token 特性,演算並建構自動 標註模型,再使用測試文件測試自動標註之效果,並依測試結果調校運算參數或調整會 議資訊特徵人工標註規則,以提升自動標註模型的績效。CRF 的實驗結果如表 6 所示, 由於希望加強 Recall,以儘可能地擷取相關的 Entities,以避免遺漏會議資訊,因此表 6 顯示 Recall 相對較高。對於可能造成的誤判,再應用許多 Heuristic Rules 過濾不適當或 是錯誤的訊息,這些 Heuristic Rules 可分為下列五種型式: 序列規則(Sequence Rule) :考量時間資訊的序列性。 詞彙規則(Term Rule) :考量特定的詞彙。 位置規則(Location Rule) :考量具名實體的相對位置。 格式規則(Format Rule) :考量時間資訊的格式。 相似規則(Similarity Rule) :考量具名實體的相似性。 表 6. 具名實體的擷取 ACIRES 持續以 Google Alert 快訊服務,以本研究整理的學科主題關鍵字,訂閱各主題相 關網頁通知,取得最新的學術會議資訊,保持資料的即時性與時效性。由 Google Alert 蒐集而得的網頁,先經由 Rainbow Classifier 的文件分類模型,自動過濾非相關網頁。再 經過去除雜訊的程序,刪除廣告,動態網頁程式等與會議資訊無關的內容。 ACIRES 採用 Lucene 檢索系統整合所蒐集與整理的會議資料。(Apache Software Foundation, 2010)Lucene 為完整的資訊檢索系統,提供全文資料及欄位資料的索引建立 與資料查詢功能。ACIRES 取用已去除雜訊的網頁內容建立全文索引。每一筆會議資料 是由一份網頁全文及多個自動擷取的特徵項目所組成,這些特徵項目也是建立索引資料 庫時,各學術會議資料的欄位索引項目。 為前端使用者系統的入口首 頁,分為時間資訊畫面、檢索功能畫面、分類瀏覽畫面、檢索結果畫面,下文簡要說明 各項功能。 5.2.1 查詢學術會議資訊 系統提供基本的全文檢索功能,以及可指定欄位的進階檢索功能。當使用者進行關鍵字 檢索時,系統查找研討會通告中含有查詢關鍵字的文件,依序列出查詢結果。使用者亦 可進一步利用不同欄位間的布林邏輯進行進階檢索,查找更精確的會議資料。使用者點 選進階檢索的鏈結,系統展現進階檢索的功能畫面,使用者可使用"AND"、"OR"、"NOT" 組合不同欄位,進階檢索提供的檢索欄位,包含所有會議資訊特徵項目,請參見圖 8。 5.2.2 檢視詳細會議資訊 查詢結果清單的每筆會議資訊包含會議名稱、會議日期、會議地點以及查詢關鍵字在文 件中出現的片段。使用者可點選每筆會議資訊的[Detail]按鈕,檢視更詳細的資料。[Detail] 視窗分為二部分,上方是本系統摘錄的會議基本訊息,下方式系統儲存的會議通告文件, 使用者也可以進一步在詳細資料視窗點選原始網頁位址,進入該學術會議官方網站取得 進一步資訊,請參見圖 9。 陳光華 圖 5. ACIRES 系統架構:後端資訊處理系統 圖 6. ACIRES 系統架構:前端使用者系統 圖 7. ACIRES 首頁 圖 8. 進階檢索 圖 9. 查詢結果清單及會議詳細資料學術會議資訊之擷取及其應用
4. 資訊擷取模型之訓練與建置
學術會議的論文徵稿通告主要包含會議名稱、會議地點、會議時間、會議主題、會議官
方網站、以及各項截止日期或公佈日期等。論文徵稿通告與一般文件最大的差異在於其
重要資訊不一定是以完整的語意文句組成,可能利用內容配置及排版以突顯各項資訊。
例如,一份論文徵稿通告的會議名稱通常單行置中且前後各有空行,研討會議題以項目
符號逐項表列,各項重要期限或公佈日期通常利用表格呈現。除了排版上的特色之外,
還可利用特定詞彙判斷是否為重要通知資訊,例如會議名稱通常會出現 conference、
international、annual 等詞彙,submission、notification、deadline 等詞彙則經常伴隨日期
出現,另外也可以利用完整的地名詞典擷取會議舉行地點。雖然可利用排版及詞彙兩種
特性設計論文徵稿通告的資訊自動擷取機制,但是網路上或電子郵件提供的論文徵稿通
告,並沒有一致的文件格式,通知項目也沒有統一的名稱,這都增加資訊判斷的困難度。
本研究應用 Conditional Random Field(CRF)建立自動擷取會議資訊的模組,從會
議通告網頁文件,擷取重要的會議資訊欄位(如會議名稱,會議日期,會議地點等)。
CRF 是在機率演算的架構之下,針對某種結構組成的文字資料進行分段(segment)
或是標註(label)的工作,其文字資料結構包含序列式或是矩陣式等。某些機器學習的
演算法必須假設每一個序列資訊都是相互獨立,例如 Hidden Markov Model(HMM),
但是真實世界的序列資料並不是由一連串獨立的資訊組成的。CRF 不同於其他機器學習
演算法,會考量隨機序列資訊的關聯性,以求整體序列的聯合條件機率,以避免詞彙標
註的偏置(bias)問題(Wallach, 2004)。本文並不試圖詳細描述 CRF 的理論與技術,
相關說明請參考(Sutton, Rohanimanesh, & McCallum, 2004; Lafferty, McCallum, & Pereira,
2001)。
表 2. 徵稿通告之特徵及對應之標籤
中文名稱 | 英文名稱 | HTML 標籤 | 標籤範例
會議全名 | Conference Name | confname | <confname> Multimedia in Ubiquitous Computing and Security Services</confname>
會議名稱縮寫 | Abbreviation of Conference Name | confabbr | <confabbr> MUCASS 2008 </confabbr>
會議地點 | Conference Location | confloc | <confloc> Hobart, Australia </confloc>
會議日期 | Conference Date | confdate | <confdate> October 14-16, 2008 </confdate>
會議網址 | Conference Website | confwebsite | <confwebsite> http://www.sersc.org/MUCASS2008 </confwebsite>
會議主題 | Conference Topic | conftopic | <conftopic> Real-time and interactive multimedia applications </conftopic>
報名截止日期 | Registration Deadline | registdue | <registdue> Registration -15th October, 2007 </registdue>
摘要提交截止日期 | Abstract Submission Due | abstractdue | <abstractdue> Deadline for abstract 11 June 2008 </abstractdue>
摘要錄取通知日期 | Abstract Notification | abstractnotify | <abstractnotify> Acceptance of papers -August 30, 2009 </abstractnotify>
論文提交截止日期 | Paper Submission Deadline | submissiondue | <submissiondue>February 15 23, 2009 -Paper submission</submissiondue>
論文錄取通知日期 | Author Notification | authornotify | <authornotify> March 23, 2009 -Author notification </authornotify>
論文定稿截止日期 | Final Paper Due | finalpaperdue | <finalpaperdue> Camera-ready copies: April 7, 2009 </finalpaperdue>
海報論文截止日期 | Poster Paper Due | posterdue | <posterdue> Poster Paper Submission Deadline May 15, 2008 </posterdue>
專題提案截止日期 | Workshop Proposals Due | workshopdue | <workshopdue> workshop submissions due : Sunday, 2 Mar 2008 </workshopdue>
教學提案截止日期 | Tutorial Proposals Due | tutorialdue | <tutorialdue> Tutorial Proposals: June 30, 2003 </tutorialdue>
博士生論壇投稿截止日期 | Doctoral Consortium Due | doctoraldue | <doctoraldue> Doctoral consortium submissions due: 6 Apr 2008 </doctoraldue>
整體工作流程如圖 1 所示,包含文件前置處理、分類模型的訓練、CRF 模型的訓練
三項工作。文件前置處理包含去除文件雜訊、標註學術會議資訊、Tokenization 與詞彙特
性標示。
圖 1. 學術會議資訊檢索與擷取自動模型之建置流程
4.1 文件前置處理
4.1.1 去除文件雜訊
由於由網際網路蒐集的文件,通常為 html 的網頁,包含許多各式各樣的資訊,除了該網
頁的主要內容之外,尚有網頁相互連結的資訊,以及網站外部的延伸資訊。有些網頁的
作者為讓網頁更吸引使用者瀏覽,採用了動態網頁或是多媒體的呈現模式,增加處理網
頁內容工作的複雜度。無論在資訊擷取的訓練階段或是正式的應用上,過多與會議資料
無關的雜訊將會影響資訊欄位判斷的精確度,因此必須先去除與網頁內容主體無關的雜
訊,包含廣告,圖片,網站目錄,視覺特效相關程式段落等等。
4.1.2 標註學術會議資訊
建構自動文件分類機制以及自動資訊擷取模型,需要大量的訓練資料,本研究另外建置
類別標註系統(Genre Annotating System,GAS),整合內容標註與文件分類二大功能,
以求內容特徵標註與文件分類標註的一致性與效率。GAS 以瀏覽器為系統平台,為典型
的 Web-Based Application,主要功能分成三部分:候選文件瀏覽、文件分類標註,以及
內容特徵標註。圖 2 為本研究建構之類別標註系統的操作畫面。
前置處理
去除雜訊
Tokenization
人工標註
會議特徵
人工分類
詞彙特性
標示
Google
Search
Google
Alert
文件集
原始網頁
文件集
(CRF 格式)
CRF
Training
CRF
Testing
CRF
Model
調整參數
文件集(已
分類網頁)
Classifier
Training
Classifier
Testing
Classifier
Model
調整參數
1. 候選文件瀏覽區
圖 2 右上方的功能區塊為候選文件瀏覽區。如前文所述,候選文件是以學門分類表的
學科名稱為關鍵字,經由 Google Search 及 Google Alert 於網路上蒐集與會議論文徵稿
通告相關的網頁文件集合,經由去除雜訊處理之後,自動載入 GAS 系統。標註人員登
入 GAS 後,系統會於候選文件瀏覽區展示由該人員負責標註之文件清單,標註人員也
可以利用左方的查詢功能篩選網頁文件,清單上同時標示每份候選文件的標註狀態及
記錄。
圖 2. GAS -功能畫面
2. 文件分類標註區
文件分類標註區位於圖 2 系統功能畫面中間的狹長矩形區塊。候選網頁文件主要分成
相關與不相關兩類,所謂的相關與不相關,是以該網頁文件是否與會議論文徵稿通告
相關與否,作為判斷的依據。但是,考量有些網頁文件內容資訊太複雜而無法斷定,
也可以暫時不將該網頁歸類,且可以註記無法歸類的原因,作為後續文件分類例外處
理的參考,如圖 3 所示。標註人員從內容特徵標註區可檢視網頁文件,判斷該文件內
容是否是會議論文徵稿通告,若確定是會議論文徵稿通告,才需要進一步針對文件內
容標註各項會議資訊。
3. 內容特徵標註區
內容特徵標註區位於圖 2 的 GAS 系統功能畫面的下方功能區塊。選取候選文件瀏覽區
的任一筆資料,系統會將該網頁文件全文載入內容特徵標註區,內容特徵標註區係以
HTML 模式呈現網頁文件內容。內容特徵標註區上方的功能列,除了提供 「復原動作」 、
「重覆動作」 、 「去除 HTML 標籤」 、及「字串查詢」等功能按鈕之外,最重要的功能是
「樣式」的下拉式選單,此樣式選單列出所有本研究採用的會議資訊特徵,標註人員
於網頁內容中框選特徵資訊後,再選取對應的會議資訊特徵樣式,標註之後,所選取
的特徵資訊會以特定的 HTML 標籤標示。例如會議名稱在 HTML 原始碼中標示為
<confname>會議名稱</confname>,本研究考量的會議資訊特徵與對應的 HTML 標籤請
再次參見表 2。
圖 3. GAS -文件分類標註區
4. Tokenization 與詞彙特性標示
CRF 需切割序列性資料為一連串 Token 後,並賦予各 Token 適當的詞性標示,再依每
個 Token 的特徵向量,計算各 Token 之間的條件機率,以做為建構詞彙辨識模型的依
據。因此去除雜訊後的網頁內容,要再抽取非 HTML 標籤的字串,將字串以單一詞彙
或標點符號為單位,切割成更小的片段為 Token,針對每一個 Token,進一步做一般詞
性標示及專門詞性標示。一般詞性標示包含標點符號,大小寫,數字,日期型態等識
別。專門詞性則包括地名,會議資訊經常使用專門詞彙,例如 conference、congress、
association、annual、national 等,本研究採用 GeoNames 地名資料庫為地名辨視依據,
並整理會議資訊經常使用的專門詞彙,用以比對並標示相關詞彙,如表 3 所示。
表 3. 會議資訊使用之專門詞彙列表
專門詞彙類別 | 詞彙項目
機構名稱 | Center, centre, college, department, institute, school, univ., university
組織名稱 | Association, consortium, council, group, society
事件名稱 | Colloquium, conf., conference, congress, convention, forum, meeting, round, roundtable, seminar, summit, symposium, table, track, workshop
時間屬性名稱 | Annual, autumn, biannual, biennial, European, fall, int., interdisciplinary, international, joint, national, special, spring, summer, winter
4.2 分類模型的訓練
文件分類的目的是為了預先過濾並非論文徵稿通告的文件,以降低內容自動標註時的負
擔。當系統運轉後,大量的網路文件進入系統時,必須先判斷是否為論文徵稿通告的相
關文件,然後再透過內容特徵擷取功能,擷取所需要的會議資訊。由於目前有許多的開
放程式碼可供使用,以開發文件分類的功能模組,本研究使用 McCallum(1996)的 Bow
Library,開發統計學習為本的文件自動分類功能模組,用以過濾由網路取得的會議通告
文 件 , Rainbow
則 是 基 於
Bow
的 應 用 程 式 , 可 由
http://www.cs.cmu.edu/~mccallum/bow/rainbow/取得。基本上,Rainbow 是利用已知類別
的文件,統計分析各文件特徵並建立分類模型,再依此分類模型對新文件進行自動分類。
在人工標註輔助系統所產生的相關文件集與不相關文件集,是收錄原始網頁文件,而不
是已被人工標註特徵項目的新網頁內容,因為本研究的會議資訊自動擷取系統,是先過
濾非會議通告網頁,才進行資訊擷取程序,因此文件自動分類功能模組,是以原始網頁
做為訓練文件。我們進行大量的訓練與測試,使用 k-Nearest Neighbor (kNN) 、Naive Bayes
(NB)、Support Vector Machine(SVM)三種分類模式,隨機抽取文件進行 20 次的實
驗,使用訓練文件與測試文件比例分別為(7:3)、(5:5)、(3:7),觀察分類績效的
變動情形,以決定系統使用的分類模型。分類結果的優劣是以 Recall (求全率) 與 Precision
(求準率)評量,可以進一步將兩項指標結合為單一的 F1 指標,計算方式說明如下。每
一篇文件皆已有正確的分類標記,在每一次的分類實驗,分類模型會為每一篇自動賦予
其分類標記,可能與正確的分類標記一樣,或是不一樣,因此有四種可能性,如表 3 所
示。
依據表 4 可以計算 Recall (R)、Precision (P)、以及 F1 Measure。
P = TP / (TP + FP),  R = TP / (TP + FN),  F1 = 2PR / (P + R)
表 4. 分類結果列聯表
System Judgment \ Expert Assignment (Category i) | TRUE | FALSE
TRUE | TP_i | FP_i
FALSE | FN_i | TN_i
因為進行了 20 次實驗,可以計算 Micro Recall、Micro Precision、Marco Recall、Macro
Precision,以及對應的 Micro F1 Measure 與 Macro F1 Measure,以觀察每次實驗的變異情
形,計算方式如下所示,其中 n 代表實驗次數。
P_micro = Σ_i TP_i / Σ_i (TP_i + FP_i),  R_micro = Σ_i TP_i / Σ_i (TP_i + FN_i)
P_macro = (1/n) Σ_i [ TP_i / (TP_i + FP_i) ],  R_macro = (1/n) Σ_i [ TP_i / (TP_i + FN_i) ]
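As an illustration of the micro/macro computation above, a small Python sketch follows; the per-run counts are made-up placeholders rather than the experimental values reported in Table 5.

```python
def micro_macro(runs):
    """runs: list of (TP, FP, FN) tuples, one per classification experiment."""
    tp = sum(r[0] for r in runs)
    fp = sum(r[1] for r in runs)
    fn = sum(r[2] for r in runs)
    p_micro = tp / (tp + fp)
    r_micro = tp / (tp + fn)
    p_macro = sum(r[0] / (r[0] + r[1]) for r in runs) / len(runs)
    r_macro = sum(r[0] / (r[0] + r[2]) for r in runs) / len(runs)
    f1 = lambda p, r: 2 * p * r / (p + r)
    return (p_micro, r_micro, f1(p_micro, r_micro),
            p_macro, r_macro, f1(p_macro, r_macro))

# Two hypothetical runs with (TP, FP, FN) counts.
print(micro_macro([(80, 20, 10), (70, 30, 15)]))
```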
4.3 CRF模型的訓練
本研究使用 CRF 模型建構會議資訊擷取的自動程序,由於目前也已有許多現成的開放程
式碼可供使用,決定採用 Kudo(2010)開發的 CRF++套件,以擷取會議論文徵稿通告
的特徵資訊,CRF++可由 http://crfpp.sourceforge.net/取得。吾人可以使用 CRF++開發文
件自動分詞(segmenting)或內容特徵標註(labeling)等序列性資料的應用系統。CRF++
宣稱使用者可以自訂資料特徵,而且計算速度快,僅使用少量的記憶體。由於 CRF++使
用特定文件格式,必須將文件內容切割成一連串的 Token,以表格的形式陳列每一個
Token 的詞彙特性、版面特性以及會議資訊等特徵,無論訓練文件或是測試文件,都必
須依循此特定格式編排。
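For illustration, the token-per-line format that CRF++ expects (token, feature columns, gold label in the last column, blank line between sentences) can be produced as in the Python sketch below. The feature columns and the BIO-style labels are simplified assumptions, not the exact feature set used in this study.

```python
# Write tokens in the token-per-line format used by CRF++ training/test files:
# each row = token, feature columns, gold label; blank line marks a sentence boundary.
sentence = [
    # (token, capitalisation feature, lexicon feature, label) -- illustrative values only
    ("International", "CAP",  "CONF_TERM", "B-CONFNAME"),
    ("Conference",    "CAP",  "CONF_TERM", "I-CONFNAME"),
    ("on",            "LOW",  "OTHER",     "I-CONFNAME"),
    ("Digital",       "CAP",  "OTHER",     "I-CONFNAME"),
    ("Libraries",     "CAP",  "OTHER",     "I-CONFNAME"),
    (",",             "PUNC", "OTHER",     "O"),
    ("Hobart",        "CAP",  "PLACE",     "B-CONFLOC"),
]

with open("train.crf", "w", encoding="utf-8") as f:
    for token, cap, lex, label in sentence:
        f.write(f"{token}\t{cap}\t{lex}\t{label}\n")
    f.write("\n")  # sentence boundary
```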
表 5. 分類結果績效比較
方法 | 訓練:測試 | Inside/Outside | P_micro | P_macro | R_micro | R_macro | F1_micro | F1_macro
SVM | 70%:30% | Outside Test | 75.30 | 75.34 | 92.07 | 92.07 | 82.84 | 82.87
SVM | 70%:30% | Inside Test | 77.94 | 78.31 | 92.70 | 92.70 | 84.68 | 84.90
SVM | 50%:50% | Outside Test | 74.19 | 74.21 | 90.36 | 90.36 | 81.48 | 81.49
SVM | 50%:50% | Inside Test | 76.07 | 77.09 | 92.14 | 92.14 | 83.34 | 83.94
SVM | 30%:70% | Outside Test | 72.90 | 72.93 | 89.10 | 89.10 | 80.19 | 80.21
SVM | 30%:70% | Inside Test | 74.83 | 76.08 | 92.85 | 92.85 | 82.87 | 83.63
Naive Bayes | 70%:30% | Outside Test | 78.00 | 78.07 | 62.63 | 62.63 | 69.48 | 69.50
Naive Bayes | 70%:30% | Inside Test | 75.29 | 75.50 | 95.30 | 95.30 | 84.12 | 84.25
Naive Bayes | 50%:50% | Outside Test | 76.31 | 76.40 | 63.02 | 63.02 | 69.03 | 69.07
Naive Bayes | 50%:50% | Inside Test | 75.28 | 75.59 | 94.18 | 94.18 | 83.68 | 83.87
Naive Bayes | 30%:70% | Outside Test | 69.76 | 69.85 | 95.37 | 95.37 | 80.58 | 80.64
Naive Bayes | 30%:70% | Inside Test | 74.84 | 75.51 | 96.33 | 96.33 | 84.23 | 84.66
kNN | 70%:30% | Outside Test | 66.97 | 69.32 | 58.67 | 58.67 | 62.54 | 63.55
kNN | 70%:30% | Inside Test | 56.88 | 57.39 | 94.73 | 94.73 | 71.08 | 71.48
kNN | 50%:50% | Outside Test | 65.74 | 67.77 | 61.82 | 61.82 | 63.72 | 64.66
kNN | 50%:50% | Inside Test | 56.14 | 56.54 | 95.70 | 95.70 | 70.77 | 71.09
kNN | 30%:70% | Outside Test | 63.51 | 67.03 | 58.67 | 58.67 | 60.99 | 62.57
kNN | 30%:70% | Inside Test | 57.98 | 59.23 | 91.42 | 91.42 | 70.96 | 71.89
表 6. 具名實體的擷取
System \ Documents | True Entities | False Entities
Positive Entities | 1632 | 1079
Negative Entities | 261 | 2785
Recall (R) = 1632/(1632+261) = 86.21%; Precision (P) = 1632/(1632+1079) = 60.20%
F1 measure (F1) = (2*P*R)/(P+R) = 70.89%
5. 系統實作與功能
為了實作本研究提出的學術資訊自動擷取的機制,並提供學術會議資訊之應用服務,我
們建構學術會議資訊檢索與擷取系統平台(Academic Conference Information Retrieval &
Extraction System,簡稱 ACIRES)。ACIRES 由後端資訊處理系統與前端使用者系統構
成,兩者皆為自動化與即時性之服務,系統架構如圖 4 所示。後端系統蒐集網路上的學
術會議資訊網頁、過濾非相關網頁、擷取會議資訊、並進而建立文件索引,前端系統是
與使用者互動的入口,使用後端系統建構之索引資料,提供使用者各項服務,並與 Google
Calendar 聯繫,建構個人行事曆。以下分別介紹後端資訊處理系統以及前端使用者系統
的各項功能。
圖 4. ACIRES 整體系統架構
5.1 後端資訊處理系統
後端資訊處理系統主要的工作為文件自動分類、資訊自動標註、以及建立文件索引,請
參考圖 5。後端系統使用 Google Alert 蒐集網路上可能的學術會議資訊、過濾無關的內容、
擷取會議各項時間與地點資訊、建置文件索引資料,分別說明如下。
5.1.1 文件自動分類
5.1.2 資訊自動標註
已去除雜訊的網頁,進一步轉製成特定格式,以本研究建置的 CRF 資訊擷取模型,自動
標註網頁中的會議資訊特徵。系統解析完成標註的文件,一一擷取各項特徵項目,再針
對不同資料格式進一步處理,例如統一日期格式、轉換 HTML 特殊字元等。另外,有些
網頁可能包含一個以上的學術會議資訊,因此同一份文件所擷取的項目會有重覆出現的
狀況,例如有兩個會議時間、有三個會議地點等。系統則依文件排版的先後順序關係,
將特徵項目分組為多筆會議資料。
5.1.3 建立文件索引
透過自動資訊擷取所取得的各項會議資訊,以及研討會通知網頁中未被擷取的其他相關
資訊,都需進一步整合為容易查找的資料集合,以提供快速且簡便的檢索及瀏覽服務。
(圖 4 方塊標籤從略:後端系統/前端系統、Internet、Focused Crawler、Client、索引資料庫、文件自動分類、資訊自動標註、建立文件索引、會議資料查詢、會議資料瀏覽、個人行事曆、Command Flow、Data Flow。)
5.2 前端使用者系統
如前文所述,前端系統乃是支援使用者各項功能的入口,其架構如圖 6 所示,各項功能
可分為兩大模組:1) 會議資料搜尋;2) 個人行事曆。會議資料搜尋為了滿足使用者檢
視資料的不同需求,實際提供了包括基本檢索、進階檢索、分類瀏覽、時間瀏覽、地點
瀏覽等功能;個人行事曆則是提供行事曆的管理功能。圖 7 為前端使用者系統的入口首頁。(圖 5、圖 6 分別為後端資訊處理系統與前端使用者系統之架構圖,圖中方塊標籤從略。)
Chakrabarti, S., van den Berg, M., & Dom, B. (1999). Focused Crawling: A New Approach to Topic-Specific Web Resource Discovery. In Proceedings of the 8th International World Wide Web Conference (Vol. 31, pp. 1623-1640). Retrieved Oct. 1, 2010, from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.43.1111&rep=rep1&type=pdf
Grishman, R., & Sundheim, B. (1996). Message Understanding Conference-6: A Brief History. In Proceedings of the 16th International Conference on Computational Linguistics (pp. 466-471). Retrieved Oct. 1, 2010, from http://www.aclweb.org/anthology/C/C96/C96-1079.pdf
Kudo, T. (2010). CRF++: Yet Another CRF Toolkit, Version 0.54. Retrieved Jun. 2, 2010, from http://crfpp.sourceforge.net/
Lafferty, J., McCallum, A., & Pereira, F. (2001). Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In Proceedings of the 18th International Conference on Machine Learning (pp. 282-289). Retrieved Oct. 1, 2010, from http://www.cis.upenn.edu/~pereira/papers/crf.pdf
Lazarinis, F. (1998). Combining Information Retrieval with Information Extraction for Efficient Retrieval of Calls for Papers. In Proceedings of IRSG98. Retrieved Oct. 1, 2010, from http://www.cs.strath.ac.uk/~mdd/research/publications/98lazarinis.pdf
McCallum, A. (1996). Bow: A Toolkit for Statistical Language Modeling, Text Retrieval, Classification and Clustering. Retrieved Aug. 4, 2009, from http://www.cs.cmu.edu/~mccallum/bow
MUC (2001). Message Understanding Conference Evaluation. Retrieved Oct. 1, 2010, from http://www-nlpir.nist.gov/related_projects/muc/
Pink, B., & Bascand, G. (2008). Australian and New Zealand Standard Research Classification (ANZSRC). Retrieved Mar. 2, 2010, from http://www.arc.gov.au/pdf/ANZSRC_FOR_codes.pdf
QS (2010). World University Rankings. Retrieved Oct. 1, 2010, from http://www.thes.co.uk/worldrankings/
Schneider, K.-M. (2005). An Evaluation of Layout Features for Information Extraction from Calls for Papers. In Proceedings of Lernen, Wissensentdeckung und Adaptivität (pp. 111-116). Retrieved Oct. 1, 2010, from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.87.118&rep=rep1&type=pdf
Sutton, C., Rohanimanesh, K., & McCallum, A. (2004). Dynamic Conditional Random Fields: Factorized Probabilistic Models for Labeling and Segmenting Sequence Data. In Proceedings of the 21st International Conference on Machine Learning. Retrieved Oct. 1, 2010, from http://www.cs.umass.edu/~mccallum/papers/dcrf-icml04.pdf
Takada, T. (2008). ConfShare: A Unified Conference Calendar that Assists Researchers in the Tasks for Attending an Academic Conference. Journal of Information Processing Society of Japan, 49(12), 4093-4104.
Wallach, H. M. (2004). Conditional Random Fields: An Introduction. Technical Report MS-CIS-04-21, University of Pennsylvania. Retrieved Oct. 1, 2010, from http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.64.436&rep=rep1&type=pdf
Xin, X., Li, J., Tang, J., & Kuo, Q. (2008). Academic Conference Homepage Understanding using Constrained Hierarchical Conditional Random Fields. In Proceedings of the 17th ACM Conference on Information and Knowledge Management (pp. 1301-1310). Retrieved Oct. 1, 2010, from http://doi.acm.org/10.1145/1458082.1458254 |
40,801,062 | Study on the Machine Tractable Thesaurus Dictionary of Contemporary Chinese Functional Words for Information Processing and Design Information Terms for Dictionary Entries | [] | Study on the Machine Tractable Thesaurus Dictionary of Contemporary Chinese Functional Words for Information Processing and Design Information Terms for Dictionary Entries
December 2005
信息處理用現代漢語虛詞義類詞典研究和工作單設計
Department of Computer Science and Technology
The State Key Laboratory of Intelligent Technology and System
Tsinghua University
100084BeijingChina
陳群秀
Department of Computer Science and Technology
The State Key Laboratory of Intelligent Technology and System
Tsinghua University
100084BeijingChina
Study on the Machine Tractable Thesaurus Dictionary of Contemporary Chinese Functional Words for Information Processing and Design Information Terms for Dictionary Entries
Computational Linguistics and Chinese Language Processing
104459December 2005[Received April 12, 2005; Revised June 13, 2005; Accepted August 15, 2005]
V 起來;剛剛開始 例如:戰爭剛打起來; 2.2.5 進行體 框架標誌:V 著;正 V 著(呢) ;正在 V 著;在 V 著;V 1 著 V 2 ; V 1 著 V 1 著 V 2 ;V 著 N;V 著點兒!V 著! 例如:他正寫著報告呢;門開著;她笑著說;說著說著哭了;牆上掛著軍用 地圖;聽著點兒!你聽著! 2.2.6 繼續體 框架標誌:V 下去;V 下來; 例如:這種武器還要生產下去;戰士們堅持下來了。 2.2.7 完成體 框架標誌:V 了 1 ;V 過 2 (了 1 ); 例如:吃過飯就去。 2.2.8 結果完成體 框架標誌:V 好;V 完;V 成;V 上;V 上來;V 到;V 著(zháo); V 住;V 掉;V 下;V 下來;V 去:V 了 3 (lou); 例如:穿好軍裝;穿上軍裝;穿完軍裝;抓住敵人;抓著敵人;抓到敵人; 脫下軍裝;脫了 3 軍裝;脫去軍裝;脫掉軍裝; 2.2.9 剛歷體 框架標誌:V…來著; 例如:河對岸剛才打槍來著。 2.2.10 經歷體 框架標誌:V 過 1 ;曾經 V 過 1 ;曾 V 過 1 ; 例如:執行過偵察任務; 2.2.11 試量體 框架標誌:VV;VV…看;V 了 1 V; 例如:把坦克修修;讓他來修修看; 2.2.12 小量體 框架標誌:V 一 V;V 一下;V 了 1 一 V;V 了 1 一下; 例如:稍微把準星調一調; 2.2.13 反復體 框架標誌:V 來 V 去;V 過來 V 過去; 例如:跑來跑去;在天上飛過來飛過去; 2.2.14 多發體 框架標誌:V 1 V 1 V 2 V 2 ; 例如:說說笑笑;蹦蹦跳跳; 2.2.15 已然體 框架標誌:已經 V;已 V;早已 V;(是)N 處 V 的 N;例如:是西單上的車;已經打響;
9.1 引進對象範疇:指說話者對引進對象的表示。 例如,由"給"、"于"、"對"、"為"、"由"、"替"、"向"、"沖"、 "沖著"、"奔"、"奔著"、"當著"等引進的人或物對象。 9.2 來源去向經由範疇:表示時間、地點的來源、去向或經由的範疇。 例如,"打"、"從"、"來自"、"由"、"向"、"打從"、"從打"、"經"、 "奔"、"朝"等。 9.3 依據憑藉範疇:表示說話、做事的依據和憑藉的範疇。 例如,"根據"、"據"、"憑"、"憑藉"等。 9.4 目的範疇:表示說話或做事的目的對象。 例如,"為"、"為了"等。 例如,象"啾啾"、"嘰哩呱啦"、"淅淅瀝瀝"、"哐當"、"嗷嗷"、"吧唧"、 "吧嗒"這樣的象聲詞,還有表示並列名詞或名詞性片語的"和"、"與"、"跟"、 "及"、"以及"等。
信息處理用現代漢語虛詞義類詞典研究和工作單設計 1
1Study on the Machine TractableThesaurus Dictionary of Contemporary Chinese Functional Words for Information Processing and Design Information Terms for Dictionary Entries 陳群秀 * introduces our research on the Machine Tractable Thesaurus Dictionary ofQunxiu Chen
摘 要
目前,世界各國學者都十分重視語言信息處理的知識資源的建設,知識包括詞
彙學知識、句法學知識、語義學知識、語用學知識乃至常識方面的知識,核心
問題是語義學知識。在語義學知識中,詞彙的語義知識是最基本最重要的語義
知識。在過去的 10 年中,我們清華大學和合作者對漢語實詞中主要的語類(例
如:動詞、形容詞、名詞)的詞彙語義知識進行了系統、全面的研究,並且研
製出"現代漢語述語動詞機器詞典"、"現代漢語述語形容詞機器詞典"、
"現代漢語名詞槽關係系統"和"現代漢語語義分類詞典"。但是漢語虛詞的
詞彙語義知識的研究特別是面向信息處理用的虛詞的詞彙語義知識的研究至
今還是一個空白點。本文首先討論了漢語虛詞研究對漢語信息處理的意義。漢
語是孤立語、詞根分析型語言,大多數漢語詞彙本身不能明顯地表達語法意
義,句法手段主要靠虛詞和語序,況且虛詞對表示語態、語氣、時態體貌、能
願情態、句子關係、程度、範圍、引入對象等句子語義和篇章結構有關鍵的作
用,因此漢語虛詞詞彙語義研究對漢語信息處理有著特別重要的意義。其次,
本文介紹了一個信息處理用現代漢語虛詞義類詞典的探索研究,是在漢語的語
1. 前言
中文信息處理的研究和進展,依賴于漢語計算語言學的詞彙學、句法學、語義學、語用
學的研究和進展,核心問題是語義學。語義學是難度最大、起步較晚的一個薄弱環節。
由于漢語缺乏屈折變化,是語義型語言,句法分析對句子的貢獻比英語等要小,因此語
義分析對漢語機器理解尤為重要。目前,自然語言理解、漢語信息處理處于一個關鍵時
期,處在取得重大突破的前夜,最重要最困難的是語義學的研究和突破。
在計算語言學界,越來越多的專家把機器詞典的規模和質量看作是決定一個自然語
言處理系統成敗的關鍵。對於漢語來說,由於缺乏形態變化,漢語的計算機自動分析和
處理相對別的語言要困難得多,尤其需要重視語言知識庫特別是語義知識庫的建設。目
前中文信息處理領域的語言知識庫有一些,主要是實詞的詞法詞典、實詞的語義詞典、
句法規則庫和語料庫,但是至今還沒有一個系統的漢語虛詞詞典。國內外面向人的虛詞
的研究也不少,但面向機器自動處理的虛詞研究卻不多,有的話也是零散的個別的,根
本沒有系統研究。例如著名的 EDR、WordNet、FrameNet、MindNet 等都是概念詞典,
都只有動詞、形容詞、名詞等實詞的概念,都沒有系統地研究虛詞的表達體系。漢語虛
詞的語義知識的研究特別是面向信息處理用的虛詞的詞彙語義知識的研究也至今還是一
個空白點。
清華大學一直重視和致力於中文信息處理領域的基礎研究,在中文信息處理基礎資
源建設方面已經取得了一些成果。在過去的 10 年中,我們對漢語實詞中主要的語類(動
詞、形容詞、名詞)的詞彙語義知識進行了系統、全面的研究,並且研製出現代漢語述
語動詞機器詞典、現代漢語述語形容詞機器詞典、現代漢語名詞槽關係系統和現代漢語
語義分類詞典。但是對現代漢語虛詞的語義知識特別是面向信息處理用的漢語虛詞的詞
彙語義知識一直是我們十分關注而還沒有完成的心願。
漢語是孤立語、詞根分析型語言,大多數漢語詞彙本身不能明顯地表達語法意義,
句法手段主要靠虛詞和語序。漢語裏的虛詞往往可以顯示詞與語、片語與片語以及句子
與句子之間的關係,成為語句組織的脈絡。同時,虛詞對表達語態、時態體貌、語氣、
能願情態、肯定否定、句子關係、程度、範圍、引入對象等句子語義和篇章結構有關鍵
的作用,是篇章知識的主要來源。因此漢語虛詞詞彙語義研究對中文信息處理有著特別
重要的意義。二十多年來,中文信息處理從字處理發展到詞處理進而發展到句處理這個
層面,若要取得新的突破,必須進入篇章處理層面。因為一則句子層面要想作深入處理,
必須要依靠篇章的知識,不然的話,很多語法歧義、語義歧義、指稱、照應等問題無法
解決。二則中文信息處理的一些應用領域(例如機器翻譯、自動文摘、自動問答系統的
源文分析和篇章生成)需要對篇章做出有效的分析。因此對漢語虛詞作系統全面的研究
不僅是有意義的而且是迫切需要的。但是漢語虛詞的個性很強,運用範圍很廣,運用頻
度又高,有的一詞多類兼多義,而且漢語虛詞應用很靈活且缺省現象很嚴重,因此漢語
的虛詞特別是信息處理用虛詞詞典研究具有很大難度。
2. 信息處理用現代漢語虛詞義類詞典(漢語情態表示系統)的初步研究
兩年多來,我們對現代漢語的虛詞做了大量的調研、分析和初步研究。我們研究的虛詞
包括介詞、副詞、連詞、時態助詞、結構助詞、語氣助詞、助動詞、感歎詞、擬聲詞等
詞類。我們對漢語虛詞的研究角度,不僅從虛詞的形式、語法作用角度分析,而且還從
虛詞所表示的語義角度和語用角度分析,目的在于研究一個信息處理用現代漢語虛詞義
類表達,是在漢語的語義層面上研究漢語虛詞的分類的二十世紀的表達類,亦即研究一
個信息處理用現代漢語句子情態表示系統。經過初步研究,我們將漢語虛詞義類分為語
態範疇、時態體貌範疇、語氣範疇、能願情態範疇、肯定否定範疇、(句子)關係範疇、
程度範疇、範圍範疇、對象範疇、其他範疇十大範疇,每個大範疇根據虛詞表達的意義
不同再分為若干中範疇和小範疇。下面將虛詞義類的大、中、小範疇分類例示如下:
1 語態範疇:指說話者選擇主體還是選擇客體作話題。
1.1 主動態
1.2 被動態
形態/准形態標誌:被、讓、由
1.3 使役態
形態/准形態標誌:動詞為"讓、請、使、勸、叫、教…"等使役動詞;
2 時態體貌範疇:指因說話者不同的觀察點而表達的事件的時間進程軸上所處的特定階
段或與時間無關而與動量有關的運動狀態和情貌
2.1 時態範疇
2.1.1 現在時
2.1.2 過去時
2.1.3 將來時
2.2 體貌範疇
2.2.1 預期體 框架標誌:將 V
2.2.2 即始體 框架標誌:即將 V
2.2.3 開始體 框架標誌:V 起來;V 起;V 開;V 上;
例如:戰爭打起來了;奏起軍歌;議論開了;議論上了;
2.2.4 剛始體 框架標誌:剛
賀陽,"漢語'語氣'(Modality)及其標誌簡表",1991。 史有為,羅建林,"漢語`體'及其標誌簡表",1991。 陳群秀,"漢語自然語言理解研究概況、前景及難點討論",1990 年國際中文與東方語言 計算機處理學術會議論文,長沙,1990。3. 信息處理用現代漢語虛詞義類詞典表示信息項目設計
對于機器詞典來說,所表示的信息項目的設立最為重要。我們的信息處理用現代漢語虛
詞義類詞典(現代漢語情態表示系統)信息表示項目設立原則是虛詞的語法形式、語法
意義、語義意義、語用用法相結合,期望為大家提供最精確完整和細緻的信息。我們的
工作單中對每一個虛詞描述信息項目有:詞形、拼音、詞類、義項數目、義項序號、釋
義、情態範疇(包括大範疇、中範疇、小範疇)、表示意義、表示意義的範疇值、形態/
准形態標誌、框架標誌、常用的近義/同義關聯詞語、句例、備註。工作單樣式如下:
詞條序號
製作者
工作單號
詞形
拼音
詞類
義項數目
義項序號
釋義
情態範疇 大範疇
中範疇
小範疇
表示意義
表示意義的範疇值
形態/准形態標誌
框架標誌
常用的近義/同義關聯詞語
句例
備註
其中"拼音"填寫的是該虛詞在該範疇下的拼音讀寫,填寫的拼音必須帶聲調,用
1、2、3、4、5 分別表示漢語的四聲和輕聲。"詞類"填寫的是該虛詞在該拼音和該範
疇下的詞類。"義項數目"指的是該虛詞在該拼音該詞類下共有的義項數目,"義項序
號"指的是該虛詞在該拼音該詞類該義項數目下的第幾個義項。"情態範疇"填寫的是
該虛詞在該拼音該詞類該義項序號下屬于什麼大範疇、什麼中範疇、什麼小範疇。"表
示意義"填寫的是該虛詞表示的語法意義、語義意義,包括語體。此項填寫有助于辨析
該虛詞該義項與該虛詞其他義項的區別。"表示意義的範疇值"表示的是"程度範疇"、"範圍範疇"等範疇的屬性值。例如,表示程度範疇的副詞有"很、十分、十二分、萬分、極、相當、挺、無比",填寫程度範圍值時,"十分"一詞填寫"10","很"填寫值"10","十二分"填寫"12",萬分填寫"10000","相當"填寫"8"、"挺"填寫"9","極"填寫"1000","無比"填寫"+∞"等。這樣將來機器理解時可以對它們進行比較和計算。"框架標誌"填寫的是該虛詞在使用環境中可以有多少種模式框架,與什麼類詞性的詞搭配使用等等。"常用的近義/同義關聯詞語"顧名思義是填寫與之同義或相近意義的關聯詞語或與之成對使用、前後呼應的關聯詞語。"句例"填寫的是該拼音該詞類該義項序號該範疇下的句子各種用法的真實例子。"備註"中可填寫在以上各信息項中尚未包括的信息內容或作必要的說明。
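As a rough illustration only, one filled worksheet entry could be stored in a machine-readable structure such as the Python sketch below. The English field names are our own transliterations of the worksheet items, and the example values are adapted from the text rather than taken from an actual dictionary record.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FunctionWordEntry:
    """One worksheet entry of the thesaurus dictionary of functional words."""
    word_form: str          # 詞形
    pinyin: str             # 拼音 (tones written as 1-5)
    pos: str                # 詞類
    sense_count: int        # 義項數目
    sense_index: int        # 義項序號
    gloss: str              # 釋義
    category_major: str     # 情態範疇:大範疇
    category_middle: str    # 中範疇
    category_minor: str     # 小範疇
    meaning: str            # 表示意義
    category_value: str     # 表示意義的範疇值,例如 "10"、"+∞"
    morph_markers: str      # 形態/准形態標誌
    frame_markers: str      # 框架標誌
    related_words: List[str] = field(default_factory=list)  # 常用的近義/同義關聯詞語
    examples: List[str] = field(default_factory=list)       # 句例
    note: str = ""          # 備註

entry = FunctionWordEntry(
    word_form="十分", pinyin="shi2fen1", pos="副詞",
    sense_count=1, sense_index=1, gloss="表示程度很高",
    category_major="程度範疇", category_middle="", category_minor="",
    meaning="程度高", category_value="10", morph_markers="", frame_markers="")
```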
4. 現代漢語語義知識庫平台建設的構想
目前我們正在編寫和修改"信息處理用現代漢語虛詞義類詞典"填寫規範,並且正在進
行詞典工作單的試填寫實驗。信息處理用現代漢語虛詞義類詞典的管理軟件系統也初步
設計和實現了,準備在試填寫實驗后進行工作單的正式填寫和錄入校對,以期儘快構建
這個漢語虛詞義類詞典(亦即信息處理用現代漢語句子情態表示系統)。同時,我們也
正在對已構建的"現代漢語述語動詞機器詞典"、"現代漢語述語形容詞機器詞典"、
"現代漢語名詞槽關係系統"、"信息處理用現代漢語語義分類機器詞典"等四個實詞
的語義詞典進行整合集成和機器學習功能的研究。待"信息處理用現代漢語虛詞義類詞
典"構建後,我們準備將這五個實詞、虛詞的語義詞典整合集成一個現代漢語語義知識
庫平台,此平台可為中文信息處理提供豐富、全面、可靠的語義知識支持(包括詞彙語
義知識和句子語義知識),可為現代漢語語言學、語義學研究、對外漢語教學、中小學
語文教學計算機輔助教學提供有力的工具和資源。目前我們正在進行現代漢語述語動詞
機器詞典、現代漢語述語形容詞機器詞典在漢語信息處理方面應用的探索,也正在研究
和設計這些語義詞典在對外漢語教學輔助教學方面的應用軟件。
參考文獻
林杏光,《詞彙語義和計算語言學》。北京:語文出版,1999。
林杏光,《複句與表達》。北京:中國物資出版社,1986。
陳群秀,"信息處理用現代漢語句型系統的初步研究",第二十屆東方語言計算機處理國
際學術會議(20 th ICCPOL' 2003)論文集《Advances in Computation of Oriental
Language》,2003 年 8 月,205-212。北京:清華大學出版社。
申小龍,《漢語句型研究》。海南:海南人民出版社,1989。
陸儉明,馬真,《現代漢語虛詞散論》。北京:北京大學出版社,1985。
疇"、"範圍範疇"等範疇的屬性值。例如,表示程度範疇的副詞有"很、十分、十分、, 十二分、萬分、極、相當、挺、無比",填寫程度範圍值時, 十分"一詞填寫"10", "很"填寫值"10"。"十二分"填寫"12"、萬分填寫"10000"。"相當"填寫 "8"、"挺"填寫"9","極"填寫"1000","無比"填寫"+∞"等。這樣將來機 器理解時可以對它們進行比較和計算。"框架標誌"填寫的是該虛詞在使用環境中可以 多少種模式框架,與什麼類詞性的詞搭配使用等等。"常用的近義/同義關聯詞語"顧名 思義是填寫與之同義或相近意義的關聯詞語或與之成對使用、前後呼應的關聯詞語。"句 例"填寫的是該拼音該詞類該義項序號該範疇下的句子各種用法的真實例子。"備註" 中可填寫在以上各信息項中尚未包括的信息內容或作必要的說明。 朱德熙,《語法問答》。北京:商務印書館,1985。. 疇"、"範圍範疇"等範疇的屬性值。例如,表示程度範疇的副詞有"很、十分、十分、 十二分、萬分、極、相當、挺、無比",填寫程度範圍值時"十分"一詞填寫"10", "很"填寫值"10"。"十二分"填寫"12"、萬分填寫"10000"。"相當"填寫 "8"、"挺"填寫"9","極"填寫"1000","無比"填寫"+∞"等。這樣將來機 器理解時可以對它們進行比較和計算。"框架標誌"填寫的是該虛詞在使用環境中可以 多少種模式框架,與什麼類詞性的詞搭配使用等等。"常用的近義/同義關聯詞語"顧名 思義是填寫與之同義或相近意義的關聯詞語或與之成對使用、前後呼應的關聯詞語。"句 例"填寫的是該拼音該詞類該義項序號該範疇下的句子各種用法的真實例子。"備註" 中可填寫在以上各信息項中尚未包括的信息內容或作必要的說明。 朱德熙,《語法問答》。北京:商務印書館,1985。
呂叔湘,《漢語語法分析問題》。北京:商務印書館,1979。
胡裕樹主編,《現代漢語(增訂本)》。上海:上海教育出版社,1981。
房玉清,《實用漢語語法》。北京:北京語言學院出版社,1984。
姚殿芳,潘兆明,《實用漢語修辭》。北京:北京大學出版社,1987。
黃伯榮,廖序東,《現代漢語(下冊)》。蘭州:甘肅人民出版社,1980。
陳群秀,"信息處理用現代漢語虛詞義類研究初步構想",香港:第四屆漢語辭彙語義學研討會 (4thCLSW),2003 年 6 月。
|
250,390,699 | Sapphire at SemEval-2022 Task 4: A Patronizing and Condescending Language Detection Model Based on Capsule Networks | This paper introduces the related work and the results of Team Sapphire's system for SemEval-2022 Task 4: Patronizing and Condescending Language Detection. We only participated in subtask 1. The task goal is to judge whether a news text contains PCL. This task can be considered as a task of binary classification of news texts. In this binary classification task, the BERT-base model is adopted as the pre-trained model used to represent textual information in vector form and encode it. Capsule networks is adopted to extract features from the encoded vectors. The official evaluation metric for subtask 1 is the F1 score over the positive class. Finally, our system's submitted prediction results on test set achieved the score of 0.5187. | [
208117506,
3626819,
226976077,
250390607,
1957433
] | Sapphire at SemEval-2022 Task 4: A Patronizing and Condescending Language Detection Model Based on Capsule Networks
July 14-15, 2022
Sihui Li sihui_li@mail.ynu.edu.cn
Xiaobing Zhou zhouxb@ynu.edu.cn
Yunnan University
YunnanP.R. China
Yunnan University
YunnanP.R. China
Sapphire at SemEval-2022 Task 4: A Patronizing and Condescending Language Detection Model Based on Capsule Networks
Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022)
the 16th International Workshop on Semantic Evaluation (SemEval-2022)July 14-15, 2022
This paper introduces the related work and the results of Team Sapphire's system for SemEval-2022 Task 4: Patronizing and Condescending Language Detection. We only participated in subtask 1. The task goal is to judge whether a news text contains PCL. This task can be considered as a task of binary classification of news texts. In this binary classification task, the BERT-base model is adopted as the pre-trained model used to represent textual information in vector form and encode it. Capsule networks is adopted to extract features from the encoded vectors. The official evaluation metric for subtask 1 is the F1 score over the positive class. Finally, our system's submitted prediction results on test set achieved the score of 0.5187.
Introduction
Patronizing and Condescending Language (PCL) can be considered to occur when someone's language takes a superior attitude towards others, demeans them, or describes their situation in a compassionate, pitying way. Such expressions are often unconscious and are used to try to induce action or raise awareness. Because PCL is subtle and often well-meaning, speakers often overlook its demeaning elements. Such elements may reinforce societal stereotypes about a group, normalize discrimination, and even lead to stronger exclusion (Pérez-Almendros et al., 2022).
Detecting PCL in media text is a challenging task. Recognizing PCL based on Natural Language Processing (NLP) can alert speakers to examine the rationality of their speeches, so that speeches can be more inclusive and constructive, which in turn leads to more responsible communication.
When processing corpus, the pre-trained model can convert text information into vector representation, making it more suitable for NLP tasks. Early pre-trained models were designed to learn representational word embeddings, such as Word2Vec (Mikolov et al., 2013) and GloVe (Pennington et al., 2014). Although such methods can capture the semantics of words through word embeddings, they cannot capture the concepts in the context. With the introduction of new technologies, there are now pre-trained models that can learn to represent contextual word embeddings, such as the ELMo (Peters et al., 2018) model based on LSTM (Shi et al., 2015) and the BERT (Devlin et al., 2018) model based on Transformer Encoder (Vaswani et al., 2017).
In recent years, using deep neural networks in NLP, such as Convolutional Neural Networks (CNNs) in text classification (Kim, 2014), has become mainstream. Capsule networks (Sabour et al., 2017), as a structure proposed on the basis of CNNs to improve spatial sensitivity in computer vision, is also used in text classification tasks (Yang et al., 2019;Ding et al., 2019). Kim et al. (2020) further suggest a simple routing method that effectively reduces the computational complexity of dynamic routing.
This task aims to predict whether each news text (identified by its ID) contains PCL. In our system, the text is represented as vectors and encoded using a pre-trained BERT model; a capsule network is then used to extract features from the encoded vectors, and the output of a fully connected layer represents the label probabilities. The rest of the paper is organized as follows: Section 2 introduces the system architecture. Section 3 describes the dataset, implementation details, and experimental results. The summary and outlook for future work are presented in Section 4.
System Architecture
In this section, we introduce the system architecture we use in the task, which consists of two parts: one is embedding and encoding, and the other is feature extraction and prediction. We call the model that is ultimately used to produce the submitted class prediction results BERT-Caps. The architecture of the BERT-Caps model is shown in Figure 1.
Embedding and Encoding
When using early pre-trained models for text classification, the text is usually represented as word embeddings and the resulting vector matrix is then sent to a bidirectional recurrent network for encoding, to improve the system's ability to perceive contextual information. In this paper, we mainly use the BERT-base model to represent the text in vector form and encode it.
In order to fit the pre-trained model, we need to preprocess the text in the dataset accordingly. Since the input is standard news text, only light processing is applied: the text is converted to lowercase, and markers are added to the beginning and end of the text. For example, when using BERT as the pre-trained model, [CLS] and [SEP] mark the beginning and end of the text, respectively. Then, according to the dictionary information, the words are converted into a list of their position numbers in the dictionary, and a list marking the beginning and end of each sentence is collated.
This part of the work is mainly achieved through the tokenizer attached to the module used when importing the pre-trained model. For the imported model, we set the trainable value of each layer of the model to True.
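The preprocessing can be illustrated with the Hugging Face tokenizer, which the paper uses for its RoBERTa/DeBERTa variants; the BERT-Caps run itself uses the Keras-BERT tokenizer, whose exact calls differ slightly. The example sentence is invented.

```python
from transformers import BertTokenizer

# Lower-case the text, add the [CLS]/[SEP] markers, map tokens to vocabulary ids,
# and pad/truncate to the maximum sequence length.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

text = "We need to help these poor families."
encoded = tokenizer(text.lower(), max_length=128,
                    padding="max_length", truncation=True)

input_ids = encoded["input_ids"]            # position numbers in the dictionary
token_type_ids = encoded["token_type_ids"]  # sentence/segment markers
```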
Feature Extraction and Prediction
We use the capsule networks to perform feature extraction on the hidden state of the last layer of the pre-trained model. In the capsule layer, the input is firstly processed by the Conv1d function. The convolution output is treated as a set of capsules, and a new set of capsules of the specified shape is derived through the dynamic routing algorithm. The result is the output of the capsule layer.
The flattened capsule layer output and the text vector corresponding to the first bit in the pretrained model are linked together. In order to improve the generalization ability of the model, dropout is used. During training, the concatenated outputs are first processed by dropout and then fed into the fully connected layer to predict the probability that the news text has PCL. The loss function of this model adopts categorical crossentropy.
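The following is a rough Keras-style sketch of the head described in this section, assuming a capsule-networks layer with dynamic routing is supplied by the caller (capsule layers are not built into Keras) and that the BERT encoder is wrapped as a Keras submodel returning the last hidden states. It wires up the flow described above: capsule output flattened, concatenated with the [CLS] vector, passed through dropout, and fed to a softmax layer trained with categorical cross-entropy.

```python
import tensorflow as tf
from tensorflow.keras import layers

def build_bert_caps(bert_encoder, capsule_layer, max_len=128,
                    num_classes=2, drop_rate=0.25):
    """Assemble the BERT-Caps head.

    bert_encoder: callable Keras (sub)model returning the last hidden states,
                  shape (batch, max_len, hidden_size) -- an assumption.
    capsule_layer: a capsule layer with dynamic routing, assumed to be provided.
    """
    token_ids = layers.Input(shape=(max_len,), dtype=tf.int32, name="token_ids")
    segment_ids = layers.Input(shape=(max_len,), dtype=tf.int32, name="segment_ids")

    sequence_output = bert_encoder([token_ids, segment_ids])
    cls_vector = layers.Lambda(lambda x: x[:, 0])(sequence_output)  # [CLS] vector

    caps = capsule_layer(sequence_output)   # (batch, n_capsules, capsule_dim)
    caps = layers.Flatten()(caps)

    merged = layers.Concatenate()([caps, cls_vector])
    merged = layers.Dropout(drop_rate)(merged)
    probs = layers.Dense(num_classes, activation="softmax")(merged)

    model = tf.keras.Model([token_ids, segment_ids], probs)
    model.compile(optimizer=tf.keras.optimizers.Adam(1e-5),
                  loss="categorical_crossentropy", metrics=["accuracy"])
    return model
```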
Experiment and Result
Dataset and Official Evaluation Metrics
The dataset used in the experiment is provided by SemEval-2022 Task 4, Patronizing and Condescending Language Detection (Pérez-Almendros et al., 2020). In this dataset, the degree of PCL is divided into five levels from 0 to 4. In subtask 1, levels 0-1 are regarded as negative examples and levels 2-4 as positive examples. Participants were asked to predict the presence or absence of a PCL component in the text. The labelled data set contains 9,476 negative labels and 993 positive labels, a ratio of almost 10:1. Due to this class imbalance, the F1 score over the positive class was adopted as the official evaluation metric. The formula for the F1 score is as follows:
F1 = (2 * precision * recall) / (precision + recall)    (1)
Precision means the ratio of correctly predicted positive observations to the total predicted positive observations. Recall means the ratio of correctly predicted positive observations to all observations in the real class.
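Equivalently, the official metric can be obtained with scikit-learn by scoring only the positive class; the labels below are illustrative.

```python
from sklearn.metrics import f1_score

# Illustrative binary labels (1 = contains PCL).
y_true = [0, 0, 1, 1, 0, 1]
y_pred = [0, 1, 1, 0, 0, 1]
print(f1_score(y_true, y_pred, pos_label=1))  # F1 over the positive class
```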
Implementation Details
In terms of data splitting, we import the train_test_split function from Scikit-learn (Pedregosa et al., 2011) to divide the dataset into a training set and a validation set, with test_size set to 0.2 and random_state set to 35. All experiments in this paper use the TensorFlow 2 backend.
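For instance, the split described above corresponds to a call like the following; the variable names and placeholder data are ours.

```python
from sklearn.model_selection import train_test_split

# Illustrative stand-ins for the subtask-1 paragraphs and their binary PCL labels.
texts = ["paragraph about families in need", "neutral news paragraph"] * 10
labels = [1, 0] * 10

train_texts, val_texts, train_labels, val_labels = train_test_split(
    texts, labels, test_size=0.2, random_state=35)
```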
When using BERT-base-uncased 1 as the pretrained model, we use the Keras-BERT (Shorten and Khoshgoftaar, 2021) module to implement the Tokenizer and import the model.
We also tried other BERT-based models, such as RoBERTa-base and DeBERTa-base. When implementing Tokenizer and importing models, we use the Transformers (Wolf et al., 2020) module.
The number of capsules, the number of hidden neurons, and the number of iterations of the dynamic routing algorithm are set to 10, 64, and 3, respectively.
The fully connected layer that outputs the final result in each model uses softmax as the activation function. The hyperparameters used are listed in Table 1. In actual training, in order to alleviate overfitting, ReduceLROnPlateau is introduced. A ModelCheckpoint callback is also set to save the model whenever the loss reaches a new minimum.
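A hedged sketch of the two callbacks mentioned above as they might be configured in Keras; the monitored quantity, factor, patience, and file name are assumptions, not values taken from the paper.

```python
from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint

callbacks = [
    # Lower the learning rate when the monitored loss stops improving
    # (factor and patience are assumed values).
    ReduceLROnPlateau(monitor="val_loss", factor=0.5, patience=1, verbose=1),
    # Keep only the weights that achieve the smallest loss seen so far.
    ModelCheckpoint("bert_caps_best.h5", monitor="val_loss",
                    save_best_only=True, save_weights_only=True),
]

# model.fit(train_inputs, train_labels, validation_data=(val_inputs, val_labels),
#           epochs=8, batch_size=8, callbacks=callbacks)
```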
Experiment and Result
The system uses the dataset provided by the task organizer for training. The BERT-Caps model that finally gets the submitted prediction results is saved at the end of the 8th epoch training.
The results are shown in Table 2. The values for RoBERTa_baseline come from the results published on the competition page 2. As can be seen from the table, our model's largest improvement over the baseline is in precision. We also tried to train BERT-Caps variants with fewer capsules in the capsule layer and more hidden neurons, but there was no significant improvement in the metric.
We also tried to keep almost the same system architecture, only replacing the pre-trained model and tokenizer. Unexpectedly, in the experimental environment of this paper, both DeBERTa-Caps and RoBERTa-Caps are not as good as BERT-Caps.
The best test set predictions submitted by our team were produced by the BERT-Caps model. Considering with the F1 scores obtained by the top four teams in the English data are all over 0.6400, indeed, there is a gap. Team Sapphire's final ranking is 35th.
Conclusion
This paper describes the experiments conducted by Team Sapphire in subtask 1 of SemEval-2022 Task 4: Patronizing and Condescending Language Detection. We introduced the system architecture, experimental dataset situation and results in Section 2 and 3, respectively. From the experimental results, the BERT-Caps model can achieve better results on the test set. In future work, we will improve our method to achieve better results. For example, using other text representations, and adjusting the weight of the loss function.
Figure 1: The architecture of the BERT-Caps model.
Table 1: Hyperparameters
Parameters | subtask 1
Epochs | 8
Batch_size | 8
Max_length | 128
Drop_rate | 0.25
Optimizer | Adam
Initial lr | 1e-5
Table 2: Results on subtask 1 (precision, recall, and F1 over the positive class).
Jaeyoung Kim, Sion Jang, Eunjeong Park, and Sungchul Choi. 2020. Text classification using capsules. Neurocomputing, 376:214-221.
Y. Kim. 2014. Convolutional neural networks for sentence classification. Eprint arXiv.
Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
Fabian Pedregosa, Gaël Varoquaux, Alexandre Gramfort, Vincent Michel, Bertrand Thirion, Olivier Grisel, Mathieu Blondel, Peter Prettenhofer, Ron Weiss, Vincent Dubourg, et al. 2011. Scikit-learn: Machine learning in Python. The Journal of Machine Learning Research, 12:2825-2830.
https://storage.googleapis.com/bert_models/2018_10_18 /uncased_L-12_H-768_A-12.zip
https://sites.google.com/view/pcl-detection-semeval2022/ranking
Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. 2018. BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
Yunxia Ding, Xiaobing Zhou, and Xuejie Zhang. 2019. YNU_DYX at SemEval-2019 task 5: A stacked BiGRU model based on capsule network in detection of hate. In Proceedings of the 13th International Workshop on Semantic Evaluation, pages 535-539.
Jeffrey Pennington, Richard Socher, and Christopher D. Manning. 2014. GloVe: Global vectors for word representation. In Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP), pages 1532-1543.
Carla Pérez-Almendros, Luis Espinosa-Anke, and Steven Schockaert. 2020. Don't Patronize Me! An Annotated Dataset with Patronizing and Condescending Language towards Vulnerable Communities. In Proceedings of the 28th International Conference on Computational Linguistics, pages 5891-5902.
Carla Pérez-Almendros, Luis Espinosa-Anke, and Steven Schockaert. 2022. SemEval-2022 Task 4: Patronizing and Condescending Language Detection. In Proceedings of the 16th International Workshop on Semantic Evaluation (SemEval-2022). Association for Computational Linguistics.
Matthew Peters, M. Neumann, M. Iyyer, M. Gardner, and L. Zettlemoyer. 2018. Deep contextualized word representations. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers).
Sara Sabour, Nicholas Frosst, and Geoffrey E. Hinton. 2017. Dynamic routing between capsules. Advances in Neural Information Processing Systems, 30.
Xingjian Shi, Zhourong Chen, Hao Wang, Dit-Yan Yeung, Wai-Kin Wong, and Wang-chun Woo. 2015. Convolutional LSTM network: A machine learning approach for precipitation nowcasting. Advances in Neural Information Processing Systems, 28.
Connor Shorten and Taghi M. Khoshgoftaar. 2021. KerasBERT: Modeling the Keras language. In 2021 20th IEEE International Conference on Machine Learning and Applications (ICMLA), pages 219-226. IEEE.
Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is all you need. Advances in Neural Information Processing Systems, 30.
Thomas Wolf, Lysandre Debut, Victor Sanh, Julien Chaumond, Clement Delangue, Anthony Moi, Pierric Cistac, Tim Rault, Rémi Louf, Morgan Funtowicz, Joe Davison, Sam Shleifer, Patrick von Platen, Clara Ma, Yacine Jernite, Julien Plu, Canwen Xu, Teven Le Scao, Sylvain Gugger, Mariama Drame, Quentin Lhoest, and Alexander M. Rush. 2020. Transformers: State-of-the-art natural language processing. In Proceedings of the 2020 Conference on Empirical Methods in Natural Language Processing: System Demonstrations, pages 38-45, Online. Association for Computational Linguistics.
Investigating the transferring capability of capsule networks for text classification. Min Yang, Wei Zhao, Lei Chen, Qiang Qu, Zhou Zhao, Ying Shen, Neural Networks. 118Min Yang, Wei Zhao, Lei Chen, Qiang Qu, Zhou Zhao, and Ying Shen. 2019. Investigating the transferring capability of capsule networks for text classification. Neural Networks, 118:247-261. |